Projects with Labs and applied cases

We turn technical challenges into functional prototypes (PoC/MVP) with measurable impact on productivity, quality, deadlines, security, and ESG. Our Labs bring together teacher-practitioners, specialists from your company, and GUTEC talent to work with the exact workflows, standards, and software you use every day.

Labs

Value proposition.

From training to operational results. A Lab isn’t a course—it’s a guided project where you design, prototype, and validate a solution to your problem. Each sprint produces reusable deliverables (models, plans, scripts, dashboards, procedures) that can be integrated into your CDE/PMO.

Mixed team with shared ownership. We combine teacher-practitioners (who work on site, in plants, or in control centers), your technical team (who knows the real constraints), and advanced students (production capacity with supervision). The mix accelerates delivery and lowers risk: experience + hands + focus.

Professional stack and acceptance criteria. We work with BIM (Revit/Navisworks/IFC), GIS (QGIS/ArcGIS), FEM (ETABS/SAP/Robot), hydraulics (HEC-RAS/HMS/SWMM), MEP/Energy (ETAP/DIgSILENT), PM (Primavera/MSP), analytics (Power BI/Python), OT/SCADA/IEC-61850, Fire Safety, and Data Centers. Deliverables are accepted based on clear technical criteria (tolerances, standards, QA/QC, reproducibility).

What a Lab does and why it's different.

Agile methodology with PDCA. 2–3 week sprints: plan → build → verify → act. Each demo collects feedback from the Sponsor and finalizes acceptance criteria. If something does not add value, we remove it from the backlog: zero vanity work.

End-to-end compliance. From day 0 we put NDAs in place, use anonymized or synthetic data, define the IP policy (who owns what), secure EHS permits for on-site work, and log every decision (minutes, changelog).

All Labs can be combined (e.g., BIM + 4D/5D + ESG for hotel retrofits; OT/SCADA + substations for utilities).

Types of Labs

Purpose: To standardize coordination, reduce clashes, and link the model to time and cost.
What we do: We define the BEP, configure the CDE (names, folders, permissions, transmittals), model and refine MEP, apply clash rules, and generate 4D/5D models with BC3/QTO data extraction.
Deliverables: BEP + templates, ready CDE, validated IFC package, before/after clash report, 4D/5D plan, and earned value (EVA) dashboard.
Typical impact: 25–40% fewer coordination hours; 20–35% less MEP rework; +10–15 points in PPC.
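The earned value (EVA) dashboard mentioned above rests on standard earned value management formulas. A minimal sketch in Python (the figures are illustrative, not from any real project):

```python
def earned_value_metrics(pv, ev, ac):
    """Standard earned value management (EVM) indicators.

    pv: planned value (budgeted cost of work scheduled)
    ev: earned value (budgeted cost of work performed)
    ac: actual cost of work performed
    """
    return {
        "SV": ev - pv,   # schedule variance (> 0 means ahead of plan)
        "CV": ev - ac,   # cost variance (> 0 means under budget)
        "SPI": ev / pv,  # schedule performance index
        "CPI": ev / ac,  # cost performance index
    }

# Illustrative sprint snapshot (EUR): 80k planned, 72k earned, 90k spent.
m = earned_value_metrics(pv=80_000, ev=72_000, ac=90_000)
print(f"SPI={m['SPI']:.2f} CPI={m['CPI']:.2f}")  # SPI=0.90 CPI=0.80
```

An SPI or CPI below 1.0 flags schedule slippage or overspend early enough for the sprint retrospective to act on it.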

Purpose: To compare alternatives for alignment, drainage, pavement, and capacity, taking into account risks and maintenance.
What we do: We model alternatives, drainage hydraulics (HEC-RAS/HMS/SWMM), GPR surveys if applicable, pavement parameters, and analyses of railway/ERTMS or ITS interfaces.
Deliverables: comparative report, plans/MDT, drainage and pavement, multi-criteria matrix, risk and operations plan.
Impact: better-informed CAPEX/OPEX decisions and reduced claims due to interface issues.

Purpose: To raise efficiency, safety, and availability.
What we do: Load calculations, protection selectivity, IEC-61850 integration, FAT/SAT testing, data center commissioning (Uptime Tier), operations, and redundancy.
Deliverables: Single-line diagrams, selective coordination, test plan, commissioning protocols, initial O&M manual.
Impact: Fewer incidents; less downtime; PCI/REBT and Tier compliance.

Purpose: To mitigate flooding and improve urban resilience.
What we do: We model watersheds, extreme events, and flood zones; we design SUDS (retention basins, swales, green roofs); and we develop maintenance plans.
Deliverables: Calibrated models, risk maps, and a prioritized list of measures with hydraulic ROI.
Impact: Evidence-based investment prioritization; reduction in expected damage.

Purpose: to diagnose pathology and define viable reinforcements.
What we do: inspection/testing (where applicable), FEM with load hypotheses, FRP/post-tensioning, underpinning, instrumentation, and back-analysis.
Deliverables: verified model, details, auscultation plan, construction procedure.
Impact: greater structural safety, cleaner and more traceable works.

Purpose: To move from prescriptive to performance-based design and optimize compartmentalization/evacuation.
What we do: simulations, smoke/temperature calculations, evacuation times, and fire safety strategy.
Deliverables: performance report, compartmentalization/media plans, inspection plan.
Impact: robust compliance, more efficient and defensible solutions.

Purpose: To strengthen the operation of critical assets through industrial cybersecurity.
What we do: Asset inventory, hardening, segmentation, patch management, incident response, redundancy.
Deliverables: Risk matrix (IEC-62443), runbooks, hardening checklist, testing plan.
Impact: Fewer incidents, zero critical incidents, lower MTTR, and improved compliance.
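MTTR, one of the indicators this Lab targets, is simply total downtime divided by the number of incidents. A quick sketch with invented incident data (the log below is hypothetical):

```python
# Hypothetical OT incident log: (incident_id, downtime_hours)
incidents = [("INC-01", 4.0), ("INC-02", 1.5), ("INC-03", 2.5)]

total_downtime = sum(hours for _, hours in incidents)
mttr = total_downtime / len(incidents)  # mean time to repair, in hours
print(f"MTTR = {mttr:.1f} h")  # MTTR = 2.7 h
```

Tracking this figure per sprint makes the "lower MTTR" claim measurable rather than anecdotal.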

Purpose: To integrate carbon footprint/LCA and green requirements into decision-making and tenders.
What we do: Carbon/water baseline assessment, material selection, efficiency measures, responsible procurement, and reporting.
Deliverables: LCA reports, ESG tender package, monitoring plan.
Impact: ↓ tCO₂e and operating costs; competitive ESG score.
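A carbon baseline of the kind described above typically multiplies activity data by emission factors. A minimal sketch, where both the quantities and the factors are placeholders (real assessments use published factors from national inventories or EPDs):

```python
# Hypothetical activity data and emission factors (tCO2e per unit).
activity = {"electricity_MWh": 1_200, "diesel_m3": 40, "concrete_t": 500}
factor   = {"electricity_MWh": 0.25, "diesel_m3": 2.68, "concrete_t": 0.10}

# Baseline footprint = sum of (quantity x emission factor) over sources.
baseline_tco2e = sum(qty * factor[k] for k, qty in activity.items())
print(f"Baseline: {baseline_tco2e:.1f} tCO2e")  # Baseline: 457.2 tCO2e
```

The same table then lets you rank efficiency measures by tCO₂e avoided per euro invested.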

Methodology — how we move forward sprint by sprint.

Phase 0 — Discovery (1–2 weeks)

• Kick-off meeting with the sponsor: scope, NDAs, EHS approvals, and definition of success.
• Data collection and review of internal guidelines (naming conventions, RFIs, claims).
• Deliverables: Project Charter (objectives, KPIs, initial risks, communication plan).

Phase 1 — Case Design (1 week)

• Hypotheses, deliverables map, acceptance criteria, and tests.
• CDE setup with role-based permissions and a QA/QC checklist.
• Output: sprint plan and RACI matrix.

Phase 2 — Sprints (4–8 weeks)

• Each sprint defines technical user stories (e.g., “MEP clash rules” with tolerances).
• Biweekly demos with feedback and documented decisions.
• Continuous QA: model audits, validations, change log.

Phase 3 — Demo Day (1 week)

• Technical presentation to the sponsor/project owner: scenarios, results, limitations, and next steps.
• Delivery of the reproducible package: models, scripts, reports, templates, and implementation guide.

Phase 4 — Rollout & ROI (30–60 days)

• Tracking of KPIs (hours, rework, PPC, ESG, incidents).
• Final adjustments and adoption plan (internal champions, light audits, train-the-trainer).

Roles and governance: who does what (simple RACI).

Frequency: optional daily stand-ups, weekly review meeting, sprint demo, meeting minutes with agreements/risks, and a visible dashboard (progress and roadblocks).

Deliverables: scope and quality standards.

Documented assumptions and parameters; tolerances and QC criteria; IFC/BC3/CSV export files with validation data.

WBS, relationships, resources, costs; S-curve and earned value; dashboards ready for committee review.

Single-line diagrams, selectivity, IEC 61850 (naming, SCL), FAT/SAT test plans, checklists, and incident runbooks.

Comparative reports, multi-criteria matrix (technical, cost, risk, ESG), risks with mitigation strategies, and responsible parties.

Commissioning, QA/QC, incident response, hardening, change management; ready for your PMO.

Version control, scripts, anonymized datasets, a step-by-step guide, and a decision log.

KPIs.

Each KPI includes a baseline, source, frequency, and person in charge; it is reported on a dashboard, and supporting evidence is attached.

Productivity (hours per deliverable): hours spent before vs. after; target: –20% to –40%.

Quality (nonconformities): reported/resolved; target: –20% to –35% nonconformities per cycle.

Deadline (PPC/reliability): % of commitments met; target: +10 to +15 points.

Costs (change orders): number and amount in euros per month; target: –10% to –20%.
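PPC, the reliability KPI above, is the share of weekly commitments actually completed as promised. A small sketch with invented commitment data from a Last Planner board:

```python
def ppc(commitments):
    """Percent plan complete: % of planned tasks completed as promised."""
    done = sum(1 for completed in commitments if completed)
    return 100.0 * done / len(commitments)

# One week's commitments (True = met); 6 of 8 kept.
week = [True, True, False, True, True, False, True, True]
print(f"PPC = {ppc(week):.0f}%")  # PPC = 75%
```

Plotting this weekly figure on the dashboard is what makes the "+10 to +15 points" target auditable.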

Case studies — how they are applied in practice.

Problem: High energy consumption and comfort complaints. Lab: Coordinated MEP model, measures ranked by ROI (insulation, free cooling, heat recovery, BMS), bid package with ESG criteria. Result: –18% estimated energy consumption, optimized CAPEX, and consistent documentation.
Problem: Schedule deviations and cost overruns. Lab: 4D/5D, EVA, Last Planner board, and claims playbook. Result: PPC +12 pts; –15% avoidable change orders.
Problem: Recurring flooding. Lab: Models, risk maps, and a prioritized portfolio of NBS/SUDS. Result: Technical basis for tendering phases and measuring resilience.
Problem: Response times and OT risks. Lab: Inventory, segmentation, hardening, SCL, and test plan. Result: ↓ MTTR, baseline compliance, 0 critical incidents.
Problem: Structural defects and reduced capacity. Lab: Diagnosis, FEM analysis, reinforcement details, and inspection plan. Result: A sound solution and a project with lower risk.
Problem: Intermittent cooling failures. Lab: Integrated testing, redundancy, and O&M dashboard. Result: Thermal stability and reduced alarms.

Data, confidentiality, and intellectual property — no surprises.

  • NDA signed prior to discovery.
  • Use real data only when absolutely necessary; give preference to anonymized data or equivalent synthetic data.
  • Access based on role and storage only on authorized servers (no USB drives or personal email accounts).
  • Default IP terms (standard recommendation):
  1. The company owns the project deliverables (models and custom-built scripts).
  2. GUTEC holds the intellectual property rights to its methodologies and teaching materials.
  3. If a GUTEC template or base script is used, a non-exclusive license is granted to the company.
  • Publication: only if both parties agree (white paper/case study); otherwise, everything remains confidential.

Frequently Asked Questions (Extended).

Is a Lab training or consulting?
Both: guided consulting with deliverables and knowledge transfer to your team.

Can we work on a real case from our own company?
Yes, that's the ideal approach. If the data is sensitive, we anonymize and segment it.

What if the prototype shows the solution is not viable?
That is also a success: we document limitations, alternatives, and a suggested next sprint. What matters is evidence and an informed decision.

Who signs off on the resulting projects?
GUTEC does not replace professional expertise; we provide the technical foundation (plus templates and metrics for the PMO) so that your team or consultants can sign off on projects in accordance with local regulations.

Can AI tools be used in a Lab?
Yes, as a support tool (QA, data extraction, visualization) used responsibly (traceability and limits). The final technical decision is made by a human and is justified.

Do students access our data?
Only with permission, and with anonymization and minimal access. There is always faculty supervision and controlled repositories.

Can we replicate a Lab internally afterwards?
Yes: we provide a reproducible package, a checklist, and a train-the-trainer program to help you scale up.
