Database Technologies and Information Management Group

The DTIM research group at Universitat Politècnica de Catalunya (UPC) conducts research in many fields of data and knowledge management, with an emphasis on big data management, NoSQL, data warehousing, ETL, OLAP tools, multidimensional and conceptual modeling, ontologies, and services. DTIM is a subgroup of the Integrated Software, Services, Information and Data Engineering (inSSIDE) research group, whose members belong to the ESSI and EIO departments.

Vision
Democratize information management and analysis to solve current societal challenges.

Mission
Gain and create knowledge on information management and analysis to contribute to global progress and development by (1) leading and contributing to local and international research and innovation projects, (2) disseminating scientific and technological knowledge, and (3) building intersectoral partnerships.
Nobody educates anybody, nobody educates himself; men educate each other under the mediation of the world.
Paulo Freire. Pedagogía del oprimido. Montevideo: Tierra Nueva, 1970.

Latest News

Latest Blog Posts

Tips and tricks on writing experiments

Presenting sound and convincing experimental results is a must in any paper. Besides providing practical evidence that your assumptions are correct, implementing experiments is a great way to find bugs and to make you rethink an algorithm that made sense on the blackboard. The objective of this blog post is to share some tips and tricks on the process of writing experiments, drawn from my own experience. Naturally, any feedback is welcome, so don't hesitate to drop me an email if you have other interesting recommendations.
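To make the spirit of the post concrete, here is a minimal sketch of the kind of experiment harness it advocates (the names run_trial and results.csv are illustrative, not from the post): seeds are fixed for reproducibility, each configuration is repeated several times, and every measurement is logged for later analysis.

```python
import csv
import random
import time

def run_trial(algorithm, input_size, seed):
    """Run one trial and return its wall-clock time; a stand-in for a real measurement."""
    random.seed(seed)                     # fix the seed so every run is reproducible
    data = [random.random() for _ in range(input_size)]
    start = time.perf_counter()
    algorithm(data)                       # the code under test
    return time.perf_counter() - start

with open("results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["algorithm", "input_size", "seed", "seconds"])
    for size in (10_000, 100_000, 1_000_000):
        for seed in range(5):             # repeat each configuration several times
            elapsed = run_trial(sorted, size, seed)
            writer.writerow(["sorted", size, seed, f"{elapsed:.6f}"])
```

Logging raw measurements rather than aggregates keeps the door open for re-plotting and re-analyzing results without rerunning the experiments.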

Automated Machine Learning and Its Tools

Most of us are well aware of how tedious, time-consuming, and error-prone it can be to design a good machine learning pipeline for a specific dataset or problem. To build such a machine learning (ML) pipeline, a researcher has to perform extensive experiments at each stage.

The success of a machine learning pipeline relies heavily on the selection of algorithms at each stage. At each step, the expert has to choose an appropriate algorithm, or set of algorithms, along with its hyper-parameters. For a non-expert, making such selections is difficult, which limits the use of ML models or results in poor predictive performance. The rapid growth of machine learning applications has created a demand for off-the-shelf machine learning methods that can be used easily and without expert knowledge.

The idea behind Auto-ML (automated machine learning) is to make the selection of the best algorithms (suited to the given data) at each stage of the ML pipeline easier and independent of human input.
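Several tools implement this idea; auto-sklearn is one widely used example. As a minimal sketch of how such a tool is typically driven, the following assumes auto-sklearn is installed and uses illustrative time budgets and scikit-learn's bundled digits dataset:

```python
import autosklearn.classification
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Search over preprocessors, classifiers, and their hyper-parameters automatically.
automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=300,   # total budget for the search, in seconds (illustrative)
    per_run_time_limit=30,         # budget for each candidate pipeline (illustrative)
)
automl.fit(X_train, y_train)

print(automl.show_models())        # the ensemble of pipelines that was found
print(accuracy_score(y_test, automl.predict(X_test)))
```

Within the given budget, the tool decides which preprocessors, classifiers, and hyper-parameter settings to try, which is exactly the human effort Auto-ML aims to remove.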

Excuse me! Do you have a minute to talk about API evolution?

If you work in an IT-related field, it is impossible not to have heard about APIs. Even if you have never used one directly, you have benefited from them indirectly. We can describe APIs simply as a means of communication between software applications. To explain how they work, we can draw an analogy with the role of Apis (the genus of honey bees). One of the most important things Apis species do is pollinate plants: by transporting pollen from one flower to another, they make plant reproduction possible. In the API world, the plants are software applications, the pollen is the requests and responses exchanged between them, and the bees are the APIs.
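In code, that exchange is simply a request followed by a response. Here is a minimal sketch in Python against a hypothetical endpoint (api.example.com is a placeholder, not a real service):

```python
import json
import urllib.request

# Hypothetical endpoint; any public JSON API would work the same way.
url = "https://api.example.com/flowers?species=sunflower"

with urllib.request.urlopen(url) as response:   # send the request ("carry the pollen")
    payload = json.load(response)               # parse the JSON response

print(payload)
```

The calling application never needs to know how the service is implemented internally; the API contract (the URL, the parameters, and the response format) is all that is shared, which is also why changes to that contract, i.e. API evolution, matter so much.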
