Can We Finally Put An End To Siloed Data In The AEC?
Note: This is Part 2 of our special podcast series, “The Art of the Possible.” To listen to Part 1: Solving The Challenge Of Siloed Data In The AEC, click here.
The world is more connected than ever before, and with the tools and APIs available today, the ability to solve the challenge of interoperability has finally become a reality. Join ProjectReady CEO Joe Giegerich & Head of Product Development Shaili Modi Oza as they discuss their approach to solving the challenge of interoperability with the integrated data environment or IDE during Part 2 of “The Art of the Possible: Exploring What’s Possible Today: Creating the Integrated Data Environment.”
An IDE brings together different systems, platforms, and common data environments (CDEs) used by different internal and external team members on a project. In the AEC space, commonly used CDEs include Procore, Autodesk Construction Cloud, and Microsoft 365. The goal of an IDE is to prioritize the project, regardless of the systems in play. In doing so, an IDE makes it possible to seamlessly integrate workflows, data, and people to facilitate a single source of truth.
Discover How To Eliminate Siloed Data Today
- What’s possible today for those looking to connect systems to better manage an AEC project.
- The concept of an Integrated Data Environment (IDE) and its benefits.
- How APIs promote system integration and why more companies should get involved.
- The challenge and importance of managing governance and security across systems.
Note: This is part of a special podcast series on “The Art of the Possible for AEC Tech.” You can click here to check out the rest of the series and featured blogs.
Hi, everybody. I want to thank you for joining us today on yet another one of our podcasts. I’m Joe Giegerich, founder and CEO of ProjectReady. And with me is Shaili Modi Oza, who is the CTO and head of development here at ProjectReady.
Today we’re going to be picking up on our last podcast, which was entitled The Art of the Possible. The Art of the Possible is about what modernity, the cloud, and APIs make possible in terms of improving efficiency, visibility, and control over the disparate places where project information may reside.
Before we get into that, though, I do want to tell everybody that this will be a regular series. One of our upcoming podcasts is going to be around M365 generally. We’re going to focus on SharePoint, what that means for the AEC, how to handle security in that context, email, and the rest of that stack. That series is coming up next.
Also, one thing that we’re really super excited to do is a webcast called AI: Garbage In, Garbage Out. The basic contention is that AI comes with a lot of buzzwords, but what does it really mean practically? We’ll be joined by friend and colleague Jeff Walter, and by Sala Eckhart, another colleague whose company we’ve enjoyed over the years.
Without much more ado, let’s talk about how the art of the possible leads to the creation of an integrated data environment. It’s the end goal that you want to get to. It’s now possible to start to bring information together so that you can have a single source of truth and be able to take action on things.
Well, that requires the creation of an integrated data environment, or an IDE. We all know what a CDE, or Common Data Environment, is. An integrated data environment is that process over all these different systems to integrate said data and, in our case, in a way that makes it very easy to collaborate, control, and govern. So, Shaili, if you would, what is the integrated data environment?
Yeah, I think as you mentioned, Joe, at a very high level, it is the ability to bring together the different CDEs that are part of a project. Different team members work in different CDEs. They’re using a combination of multiple CDEs where documents are stored and where different workflows across systems are being used, and something is needed at the project level which brings all of this together. At a very high level, it’s keeping the project at the center of it all and bringing the workflows, data, and people from different CDEs together.
I guess we’re remiss for our audience in not defining a CDE. I should never assume these things. Common data environment. Some AEC-specific examples would be Procore and Autodesk Construction Cloud.
But with CDEs, at least to my mind, and I don’t know if they’re technically regarded this way, I also include M365. You can include Box in that mix if you wish, because these are data environments that are common.
Yeah, definitely. With M365, a lot of organizations use it. Even if you are just using Teams and storing documents there, it is a CDE where you’re storing the documents associated with the project. Yeah, all of these would be considered CDEs. Different team members are using them based on what they do and how they’re interacting with the project.
That’s the challenge. My joke is common data environment, common to who? Right? It’s common to that FTE, or that subject matter expert, or that skillset that needs very bespoke tooling, as a general rule.
Everybody needs Microsoft 365. Frankly, it’s where most of the business world lives, but it’s common only at a particular phase in a project. It’s only common as it relates to what your contractor or your GC is working in. Is that a fair statement?
Yeah, yeah. Definitely agreed.
Of course, if common means it’s only common to you, then it’s not common in a ubiquitous fashion across all the players, and phases, and everything else in a project. Hence, the integrated data environment.
As Shaili pointed out, this is to bring those CDEs together in some fashion where you can get a better handle on managing that information and the secure governance around the collaboration. With that, that was the high level on an IDE.
Last week’s podcast was on The Art of the Possible and was very directed toward the wide availability of APIs now. Shaili, if you could take us down to another level, a little bit closer to the ground on the definition of an integrated data environment as it relates to the tooling and approach?
Yeah, definitely. Since you brought up the open APIs: all the different common data environments that we mentioned, be it Procore, Autodesk Construction Cloud, or Microsoft, everybody is moving in the direction where they have these open APIs. Autodesk has their own APIs, and so do Procore and M365.
But, essentially, everybody provides RESTful APIs, which have a similar format. It makes all of the content in these systems that much more accessible, because they recognize this need that just having the data in one environment is not enough at the project level.
They are providing end users with the ability to interact with the data using APIs. But, at an organization level, it becomes difficult to do the research on what data needs to be brought out and how that needs to happen. That is the difficulty.
But all of these common data environments now make these APIs available. At the IDE level, it’s a matter of defining what the different data points are across these systems, and how they can be brought together in a meaningful way that makes sense for managing the project, essentially.
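To make that concrete, here is a minimal sketch of what “bringing data points together in a meaningful way” can look like in practice. All of the field names and payload shapes below are hypothetical illustrations, not the actual Procore or Autodesk Construction Cloud schemas:

```python
# Hypothetical sketch: normalizing documents returned by different CDEs
# into one project-level record shape. Field names are illustrative only
# and do NOT reflect the real Procore / Autodesk API schemas.

def normalize(source, payload):
    """Map a CDE-specific JSON payload onto one common record shape."""
    if source == "procore":
        return {
            "system": "procore",
            "doc_id": payload["id"],
            "name": payload["title"],
            "updated": payload["updated_at"],
        }
    if source == "acc":  # Autodesk Construction Cloud (hypothetical shape)
        return {
            "system": "acc",
            "doc_id": payload["urn"],
            "name": payload["displayName"],
            "updated": payload["lastModifiedTime"],
        }
    raise ValueError(f"unknown CDE: {source}")

# Two differently shaped payloads collapse into one consistent format:
docs = [
    normalize("procore", {"id": 101, "title": "RFI-12.pdf", "updated_at": "2024-05-01"}),
    normalize("acc", {"urn": "urn:adsk:7", "displayName": "Plan-A.dwg", "lastModifiedTime": "2024-05-02"}),
]
```

Once every system’s records share one shape, the project, rather than any single CDE, becomes the organizing unit.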
APIs have been around for quite some time. One of the things that you’ve educated me on over time is the difference between APIs and, if you would, a modern API. I’m trying to lead the witness here a bit, but what is the distinction between older systems that have APIs versus the newer generation of APIs that are widely becoming available?
Definitely, with the newer generation of APIs, they all follow a very similar format in terms of starting with authentication. They all have a standard, OAuth, which uses access tokens and refresh tokens. It all makes accessing and interacting with this data very much more secure.
They follow that standard format where this is how you would connect to the APIs. And the way all of the APIs are formatted and they return the data has also become that much more consistent.
They all return data in a JSON format, which gives us the data in a consistent shape that is easy to interpret. Having that as a standard just makes everything that much easier, because all of these systems return data in a similar format.
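A minimal sketch of the access/refresh token pattern described here (commonly OAuth 2.0). The refresh call is injected as a plain function so the logic runs without a network; a real client would POST to the CDE’s token endpoint, whose exact shape varies by vendor:

```python
# Sketch of access/refresh token handling. The refresh_fn argument
# stands in for the real token-endpoint call, which is vendor-specific.
import time

class TokenManager:
    def __init__(self, refresh_fn, access_token, expires_at):
        self._refresh_fn = refresh_fn   # exchanges the refresh token for a new access token
        self._access_token = access_token
        self._expires_at = expires_at   # epoch seconds

    def get_token(self):
        """Return a valid access token, refreshing it if it has expired."""
        if time.time() >= self._expires_at:
            self._access_token, self._expires_at = self._refresh_fn()
        return self._access_token

# Fake refresh function standing in for a real token endpoint:
def fake_refresh():
    return "new-token", time.time() + 3600

# Start with an already-expired token to show the refresh path:
mgr = TokenManager(fake_refresh, access_token="old-token", expires_at=0)
```

The point of the pattern is that short-lived access tokens limit exposure, while the refresh token lets the integration keep working without re-prompting the user.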
The newer APIs are built in such a way that they are much faster in returning data. It’s quicker. Even if we make a lot of calls, they’re able to handle that kind of interaction with all of these systems.
They can scale on the enterprise level.
One thing I will pick up on. Yes, APIs used to be about getting to the data or pulling it out. To my semi-technical mind at this point, and I haven’t consulted for a long time, APIs were a great way to pull information into, say, BizTalk to do analysis.
But one thing you always point out. I’ll ask, “Shaili, here’s this other platform we’re hearing about in the industry that people would like integrated into our process platform, ProjectReady.” And if we can’t, the first answer I’ll get from you, Shaili, is that they don’t have webhooks, right?
Just explain, because it’s a mixed audience: webhooks and RESTful APIs are fairly new, or at least their availability is becoming much broader. What does that mean as it relates to an integrated data environment?
Yeah, definitely. Webhooks are essentially APIs that get triggered automatically when an event happens, so they are event-based APIs. Why they’re so important is that when we are building an IDE and bringing these systems together, those kinds of APIs help us trigger different workflow processes. We can’t keep polling the data, calling all the APIs over and over to see what’s going on and what has changed. Bandwidth would break, right?
Yeah. It doesn’t scale at an enterprise level if you are trying to work with a lot of data. But event-based APIs like this just get triggered automatically. We can keep track of that, and then use it, essentially, to access the data and the information. We only get the content that has been changed or updated. That just helps us run those workflows that much better.
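The push model can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual webhook contract: the IDE registers a handler per event type, and when a CDE posts a change notification, only the changed item gets processed, instead of polling everything:

```python
# Sketch of an event-based (webhook) dispatcher. The payload shape is
# hypothetical; real CDEs each define their own event body.
import json

handlers = {}

def on(event_type):
    """Register a handler function for one webhook event type."""
    def register(fn):
        handlers[event_type] = fn
        return fn
    return register

@on("document.updated")
def handle_doc_updated(event):
    # Only the changed item is re-synced, not the whole dataset.
    return f"re-sync document {event['resource_id']}"

def dispatch(raw_body):
    """Route an incoming webhook body to its registered handler."""
    event = json.loads(raw_body)
    handler = handlers.get(event["type"])
    return handler(event) if handler else None

# A CDE POSTs a body like this to our endpoint when a document changes:
result = dispatch('{"type": "document.updated", "resource_id": "doc-42"}')
```

The design choice is exactly the one discussed above: the event itself carries the context (what changed, and when), so no bandwidth is spent discovering it.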
It just strikes me now that the big departure this allows is that it’s based upon an event. Taking a step back, in olden times, probably five or ten years ago, you would just do a bulk API call, dump it into a database, and do analysis, like in BizTalk.
All that stuff was dashboard-driven, or just sent information on a trigger off to an ERP system and the like. It was a way to get monolithic data, if you like to wear hipster waders and go for it. But with the event-based API, there has to be some business logic, some event. It gives context to the call. I need to know how many RFIs were processed on this date, right?
So with that, and this is not to jump ahead, but it’s somewhat where I want to go on our call about AI: Garbage In, Garbage Out. People go, “Oh, I’ll apply AI to stuff.” Okay, how’s the taxonomy? Is there any business logic as to what you’re trying to query ultimately?
Those events will be triggered based upon business rules, business needs, the needs of the people who have to collaborate on a certain set of data that changes or is created at a point in time. That’s revolutionary when you think about how aggregated databases come together.
One of my older phrases is: before man, there was no data. Early on, once you got to computers, the challenge was to get enough data. Then, how do you store all that data? And now, how do you make sense of it, right?
And through things like, referencing our earlier cast, scalable taxonomy and understanding metadata, that data is then given intelligence of a different type based upon an event. That’s the big game changer. That is what’s going to allow the industry to continue to mature on the journey toward an integrated data environment.
I think that’s great.
But, again, you do agree with all that, right? That the big difference is the event?
If you didn’t have event-based activities, what could you do within the context of what we bring in the way of an integrated data environment? Not a heck of a lot.
Right. Yeah, definitely. Because, as we said, it wouldn’t be scalable to just poll for all of that data. If you are just getting data for the purpose of dashboarding and bringing content in, that would be all we could do without it.
For clarity, it wouldn’t be the dashboard that we have. I always like to differentiate that the dashboard in ProjectReady is really not a dashboard, it’s an action center. You have these events changing quickly, informing you, and the way our dashboard works, it’s not for looking at static information. It informs you in near real time, spanning systems, so that you can make a truly informed decision.
Then the final thing I’ll say on it is that this, I think, is a challenge in the technology world: to start to break away from the concept that all you need, that all your responsibility is, is to aggregate data.
It’s context. Without context you can’t drive process, and without process how are you going to handle the governance? So I’d like to pivot there. Based upon an event, this happens. Now you have to, within that integrated data environment, make sure the right people are getting to it. So, if you will, could you talk a little bit about the development of an integrated data environment, the challenge around governance, and what opportunity that represents?
Yeah, definitely, talking about security and governance. The challenge is that we are bringing all of this data from different systems together in one integrated data environment, and we have to make sure that all of that data is accessible only to the correct team members, because it all needs to be governed and security trimmed.
In different CDEs, there are different team members accessing and updating this data. How do you make sure that when it comes into an integrated data environment, it’s all properly governed and only the correct team members have access to it?
I think that’s very important at the project level, that it’s not all open out there for everyone to see. It needs to be properly governed as to who has access to what. That’s something that could be a big problem if it’s not handled properly.
On a certain level, it strikes me that it’s almost like having master metadata: if you have discrete roles within the IDE, you can then map them to the governing identities, literally people and logins. That’s how you really start to make this work, because then you’re just going to respect everybody’s individual privileges on these other CDEs that we’re pulling from.
Yeah, definitely. Even going back to what you mentioned about our dashboard where it’s not just bringing the data in, it adds that level of security, as well. We talked earlier on about how we handle authentication with all of these APIs.
All of these APIs give us a unique access token. Anytime a user comes into the IDE, it uses that unique access token across the different CDEs so that, if you take the example of Procore submittals or Autodesk issues, it will use their access in Procore and Autodesk to make sure they are accessing only the items, things, and features they would have access to, and they don’t see everything all at once. It does respect the inherent security of those systems, as well.
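The security-trimming idea described here can be sketched simply. This is a hypothetical illustration of the concept, not ProjectReady’s actual implementation: every aggregated item remembers its source system, and only items the user’s own credentials can reach in that system are surfaced:

```python
# Hypothetical sketch of security trimming in an aggregated view: only
# surface items the signed-in user can already access in the source CDE.
def security_trim(items, user_permissions):
    """Keep only items the user is authorized to see in the source system.

    user_permissions maps system name -> set of item ids the user's own
    token can access there (as reported by that system's API).
    """
    return [
        item for item in items
        if item["id"] in user_permissions.get(item["system"], set())
    ]

all_items = [
    {"system": "procore", "id": "submittal-1"},
    {"system": "procore", "id": "submittal-2"},
    {"system": "acc", "id": "issue-9"},
]
# This user can see one Procore submittal and the Autodesk issue:
visible = security_trim(all_items, {"procore": {"submittal-1"}, "acc": {"issue-9"}})
```

Because the per-system permission sets come from each CDE’s own API under the user’s own token, the trim inherits each system’s security rather than re-inventing it.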
Then we lay security on top. There’s the secure access to those systems that come up onto the project homepage, dashboard, call it what you want, so they can access it. But then we have our own layer on top of that, as to what you can access and what information will surface, so it becomes hierarchical governance at that point.
It strikes me that there are a couple of things we get good grades on from our customers, and some of them are the simple things, doing what you said and what we do. Well, there’s an efficiency to logging in once to one application, not having to constantly log into a myriad of them.
Then, in terms of efficiency, down to being directed to the relevant data and content in a couple of clicks. One click, everything together in context. So how do you think, and what have you seen with our customers, that an integrated data environment drives better efficiency, helps reduce risk, and enables better decision-making? Any comments you’d like to make there?
Yeah, I think definitely, something as simple as what you mentioned: getting to the correct project and the correct documents you’re looking for across so many different systems. It just saves so much time for our customers and end users to have that right on the dashboard, all of the links and URLs to where the data is across the different projects.
It just makes it so much easier to get to all of the data that they’re trying to get to. It definitely increases the efficiency a lot. Combined with the different workflows that we have where we are trying to bring content in and send content in from all of these different systems, just by staying in the interface of the IDE and still having access to the connected CDEs just increases efficiency greatly.
Yeah, and with an integrated data environment, certainly, our platform also involves the synchronization of documents and the like, so webhooks make it possible to get information in near real time or real time.
If everybody’s looking at the same piece of content with the same time and date stamp, and you can get to that underlying system… One of our larger sponsors said, “Do you know how much time that one link saves me every day?” Because he’s in 10 projects, each with four systems. Now, he can just go right to the project card, go right to where he has to go, so those little things add up.
A bit of a shameless plug: if you go to www dot, I don’t even know why I should say that anymore, but project-ready.com, we have a whole bunch of ROI calculators. I know it’s the thing to do on your website, but they’re pretty profound.
If you have 100 people doing the same task 10 times a day that takes them 10 minutes, and you make it one minute 10 times a day, you saved a lot of money. So the ability to interoperate, move, and massage all that data quickly is one of the major benefits of an IDE. Of course, combining that with the governance, the secure access, better collaboration.
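Joe’s example works out to a striking number. Here is the arithmetic, with a 250-workday year added as an assumption for the annual figure:

```python
# Working out the ROI example: 100 people doing a 10-minute task
# 10 times a day, reduced to 1 minute per task.
people, times_per_day = 100, 10
minutes_before, minutes_after = 10, 1

minutes_saved_per_day = people * times_per_day * (minutes_before - minutes_after)
hours_saved_per_day = minutes_saved_per_day / 60   # 9,000 minutes -> 150 hours

# Assuming roughly 250 workdays per year:
hours_saved_per_year = hours_saved_per_day * 250   # 37,500 hours
```

That is 150 person-hours recovered every single day from one repetitive task.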
Some examples of this: you have to work with people in different companies and different consultancies. It’s just how it’s going to go. So if you know where all the information is, and it’s all tidy, now you can start to effectively manage other things.
What form and fashion does better collaboration come in? How do you deal with email? We have these four or five things that everybody has a problem with that we fix. And email, even looking at it from a technologist’s perspective, you go, “Oh, it’s email.” But it’s a pain to manage.
Once you have an IDE, you get the benefit of managing all that email and all those different documents in context, across people, with a single view. And, certainly within our own product, the ability to set these projects up in tandem using the same set of descriptors. That all facilitates what is really the most important thing in tech, something I will always stress.
When I have great conversations with my son and we talk politics, or we talk society, I go, “Just follow the money.” There has to be a return on investment. That which saves money, that which creates capital, wins. An IDE does just that.
It means that everybody can work together efficiently. The data makes sense in its context and, with the right taxonomy, can scale endlessly. Shaili, I don’t know if there’s anything you would like to add but, otherwise, folks, I’m thinking it’s probably a good time to wrap. We’ve been at this for a little bit. Any other closing comments, Shaili?
No, I think you brought it all nicely together, Joe. And, yeah, I think we can end here.
All right. One more quick reminder: we’re going to be doing a lot of podcasts going forward. Again, our upcoming podcast, please do join us if you find any value here, is going to be around M365 in the AEC, focusing on SharePoint and best practices. We’ll also focus on email. Then, of course, the one I’m really looking forward to is AI: Garbage In, Garbage Out.
So, till next time, everybody. I really do appreciate it when you come to listen. Hopefully this drove some interesting thoughts for you, and I look forward to having you come by next time. I’m Joe Giegerich. Thank you, again.