
Block Cat

  1. Smart cargo insurance — Solution for the Odyssey hackathon 2019

⁉️ The problem

Dumb containers

Recently, an accident occurred on a cargo ship off the coast of the Netherlands: a total of 270 cargo containers were thrown overboard into the sea. Many questions have been raised since then about the traceability of these containers. After the accident, no one really knew which containers fell into the ocean, what was in them, and certainly not where they were. Accidents like this still occur regularly. A similar story is the one about Porsche, who had to restart production of the 911 GT2 RS because the ship carrying four of them sank.

Flat rates

Currently, most cargo is insured at a flat rate that depends only on the weight of the cargo; the value is not taken into account. If you ship a container full of sand and a container full of phones that weigh the same, they are insured for the same amount.

👀 Our solution

The shift

We see a shift from cost management to risk management. By focusing on reducing accidents and the related damage, we can save costs and avoid unnecessary delays for both the insurer and the insured. We can achieve this by providing risk-mitigating advice to the insured party. By enriching the data, for example with near-real-time ship tracking using AIS data from Spire (and later other providers), we can calculate risk scores more accurately and give better advice.

Smart containers

By registering hand-overs, we can bring transparency to who currently holds the goods as they pass between carriers. Adding smart sensors to these dumb containers makes it easy to capture detailed information about the goods across those hand-overs. All sensor data is verified using a blockchain, which makes sure carriers can be held liable if the temperature or other sensor readings exceed their boundaries for too long.
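The boundary check described above can be sketched in a few lines of TypeScript. This is a toy illustration only: the types, thresholds, and function name are invented here, and the real platform would read verified sensor data from the chain.

```typescript
// Hypothetical types: a custody log and temperature readings for one container.
interface HandOver {
  carrier: string;
  timestamp: number; // epoch millis when this carrier took custody
}

interface SensorReading {
  timestamp: number;
  temperature: number;
}

// Find the carrier that held the container when the temperature first
// stayed above `maxTemp` for at least `maxBreachMs` milliseconds.
function findLiableCarrier(
  handOvers: HandOver[],
  readings: SensorReading[],
  maxTemp: number,
  maxBreachMs: number
): string | null {
  let breachStart: number | null = null;
  for (const r of readings) {
    if (r.temperature > maxTemp) {
      breachStart = breachStart ?? r.timestamp;
      if (r.timestamp - breachStart >= maxBreachMs) {
        // Custody belongs to the last hand-over before the breach started.
        const holder = handOvers
          .filter(h => h.timestamp <= breachStart!)
          .sort((a, b) => b.timestamp - a.timestamp)[0];
        return holder ? holder.carrier : null;
      }
    } else {
      breachStart = null;
    }
  }
  return null;
}
```

Because every reading and hand-over is verified on chain, the result of a check like this is something all parties can trust.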
Combined with the other enriched data, all parties on the platform get an accurate and trustworthy overview of the status of a given transport.

Dynamic premium

Flat rates don't work for all types of cargo. With all the data we receive from different sources (e.g. Spire, weather, container sensors, the characteristics of the goods and of the transport), we can build a much more accurate risk profile. Based on that profile, better and more personalized advice can be given to the insured party. This advice can range from obligatory to strongly advised to optional, and depending on the type of advice and whether or not the insured follows it, premiums can be adjusted as an incentive. This extensive profile also allows insured parties who consistently get good risk profiles to earn lower premiums, because they have proven reliable.

Liability & faster claim resolution

By using a blockchain to verify the captured data and the current holder of the cargo, we can see very quickly which party is liable when something happens (e.g. who held the container of bananas when the temperature exceeded its boundaries for 1 hour). Since this data is trustworthy, claims can be received and processed more quickly and with fewer disputes. And the quicker a claim gets created, the quicker the other parties in the chain can adapt: they immediately know that a certain transport will not be passing by them, which gives them the opportunity to find other transports instead.

Architecture

Our solution was kept simple. We only had two developers on our team, so there wasn't a lot we could do in two days, especially since we also had to come up with what to build within that timeframe.

Our stack

TVM, the cargo insurance company that sponsored and led our track during this hackathon, already has a consortium of partners.
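As a toy illustration of such a dynamic premium, the different signals can be combined into a weighted score. All signal names, weights, and the premium formula below are invented for illustration; a real model would be calibrated on actual incident data.

```typescript
// Invented risk signals, each normalized to the range 0..1.
interface RiskSignals {
  stormRisk: number;        // from weather data along the route
  routeRisk: number;        // from historical incidents on this route
  cargoSensitivity: number; // e.g. phones score higher than sand
  adviceIgnored: number;    // fraction of risk-mitigating advice ignored
}

// Invented weights; they sum to 1 so the score stays in 0..1.
const WEIGHTS: Record<keyof RiskSignals, number> = {
  stormRisk: 0.35,
  routeRisk: 0.25,
  cargoSensitivity: 0.25,
  adviceIgnored: 0.15,
};

// Weighted average of the signals, clamped to [0, 1].
function riskScore(s: RiskSignals): number {
  const score = (Object.keys(WEIGHTS) as (keyof RiskSignals)[])
    .reduce((sum, k) => sum + WEIGHTS[k] * s[k], 0);
  return Math.min(1, Math.max(0, score));
}

// Premium scales linearly with the score: 80%..150% of the base rate.
function premium(base: number, score: number): number {
  return base * (0.8 + 0.7 * score);
}
```

Following advice lowers `adviceIgnored`, which lowers the score and thus the premium, giving the insured party a direct incentive to mitigate risk.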
Given that, the goal of building an open platform that competitors might join in the future, and the requirement that all this data be protected and readable only by specific parties, we chose Hyperledger Fabric as our blockchain technology. Our frontend was built with React and our backend with NestJS (an enterprise Node.js framework).

The Application

Transporter view

A transporter can see detailed information about the status of their goods. They also see the exact position and estimated arrival date using the APIs provided to us by Spire Maritime; using Spire's AI, we could even predict the future position of the vessel to a certain extent. Besides this, transporters can monitor the sensor data of the container: here, temperature measurements, tilt, the electronic seal and shock detection, but this can be extended with other sensors. In the top right, transporters see their dynamic premium, with advice tailored to them and an incentive to lower their risk and thus receive a lower premium (or the reverse, when the advice is ignored). An insurer sees a similar view, but instead of advice they see the risk analysis generated from the data fed into the platform. Based on this risk analysis, they can give better advice on risk-mitigating actions.

Challenges

Open platforms, and especially platforms that competitors might join, are hard to set up; Maersk is struggling with this as we speak. We don't want to be another Maersk-IBM (TradeLens) tracking containers. Our focus lies on risk management, faster claim resolution, and perhaps even faster hand-overs by allowing insurance to be taken over from a previous carrier. Tracking the containers is only a small part of the platform, since it serves to identify the carrier currently liable in the event of a claim.
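That hand-over registration can be sketched in a few lines. This is plain TypeScript standing in for the Fabric chaincode logic: the record shape, function name, and the in-memory `Map` are our own assumptions, and in a real Fabric contract the state would live in the ledger's world state behind the chaincode stub.

```typescript
// A toy in-memory "world state"; on Hyperledger Fabric this would be
// the ledger, written via the chaincode stub.
type WorldState = Map<string, string>;

interface ContainerRecord {
  holder: string; // party currently liable for the goods
  handOvers: { from: string; to: string; at: number }[];
}

// Register a hand-over: only the current holder may pass the container on.
function registerHandOver(
  state: WorldState,
  containerId: string,
  from: string,
  to: string,
  at: number
): ContainerRecord {
  const raw = state.get(containerId);
  if (!raw) throw new Error(`unknown container ${containerId}`);
  const record: ContainerRecord = JSON.parse(raw);
  if (record.holder !== from) {
    throw new Error(`${from} is not the current holder`);
  }
  record.holder = to;
  record.handOvers.push({ from, to, at });
  state.set(containerId, JSON.stringify(record));
  return record;
}
```

Because the hand-over log is append-only and validated on chain, it doubles as the custody trail used to identify the liable carrier when a claim is filed.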
TradeLens could even be a part of our solution, capturing even more information about a specific container and its contents; they mainly capture data about what happens with the container at the port. We received a very good question at the hackathon: what does our platform have that makes sure we don't run into the same struggles as Maersk? We believe the incentive will be big enough for competitors to join, since they can potentially save costs by mitigating risk and provide a better service by giving advice. Claim resolution times will also be greatly reduced, which saves everyone processing time.

🛰 How did we use Spire Maritime's data?

We used the Spire Sense Cloud Vessel API in combination with Predict AI, their machine-learning feature capable of predicting positions up to 8 hours in the future. For the demo, we filtered the vessels down to those sailing under the Dutch flag. These ships are linked to transports in our application. We can accurately position any of the ships we are tracking and even predict their future position up to 8 hours ahead, and this data feeds into our risk-management calculations. We can combine it with other data such as weather; in fact, Predict AI already integrates weather into its predictive analytics. Using this, if we see a storm forming, we can advise the ship to take a different route or wait in port to prevent damage or loss of cargo. Another example would be avoiding peak hours at the destination port, because we know where ships are and where they will be. We also received access to Spire's Enhanced Vessel Data. Since we didn't plan a user flow large enough to fit our timeframe, we didn't get to use it, but this database of technical details contains things such as specific dimensions and cargo capacity, as well as engine manufacturers and more.
Our platform could later use this data to better calculate risk profiles for vessels, since it gives better insight into the value of the cargo and into which vessels are more prone to delays due to engine failures or other conditions. If you would like to try the API yourself, you can find it here, or you can contact them if you have specific questions related to their API.

Every Dutch ship in The Netherlands (Spire Sense Cloud Vessels API & MapBox)

Smart cargo insurance — Solution for the Odyssey hackathon 2019 was originally published in wearetheledger on Medium, where people are continuing the conversation by highlighting and responding to this story. Check out the full article on Medium.
  2. Odyssey hackathon 2019 — Short recap

This was our second time attending the Odyssey (formerly Blockchaingers) hackathon. Last year we managed to win the track "Digital Nation's Infrastructure" with our self-conscious house solution. This year, the hackathon was not only focused on blockchain: AI also played a big role. Because of this, and to let more teams participate, the team limit had been reduced to 6 people. We decided to split up into 2 teams. This gave us the opportunity to invite Kurt Janssens from Brainjar to focus on AI, and to spread our knowledge and ambition across multiple tracks (challenges).

💻 The hackathon

Panorama of "The Grid", the hacker space

The Odyssey hackathon consisted of 100 selected teams this year, spread over 12 different tracks: Fossil free future, Nature 2.0, Digital citizenship, Rethink retirement, International travel, Crisis and disaster management, Inclusive banking, Feeding the future, Scaling ecosystems, Digital nations infrastructure, Future of cargo insurance, and Tokenizing the Odyssey ecosystem. We chose the tracks "Feeding the future" (sponsored by Nutreco) and "Future of cargo insurance" (sponsored by TVM). The track sponsors brought their industry knowledge and we brought our blockchain tool belts, and together this created 2 awesome solutions.

📚 The preparation

Compared to last year, we actually did some preparation, beyond setting up our boilerplates at least. It's a hackathon, so building everything up-front is no fun, but we did some initial analysis and research into what the problems might be and how we could tackle them. No matter how much analysis you do upfront, you only know for sure once you can actually speak with the track sponsors and Jedis (people with specific expertise walking around at the event) to create a solution that fits them perfectly. We recently became aware of Spire, a space-to-cloud data & analytics company with a constellation of 76 satellites in orbit.
They collect AIS data for maritime domain awareness, ADS-B data for aviation tracking, and weather data. Since one of the challenges was insuring cargo, it seemed fitting to use their Spire Sense Cloud Vessels API. So we reached out to Spire Maritime, and they graciously provided us access to their full product suite, which included Predict AI, a machine-learning feature capable of predicting positions up to 8 hours in the future.

👨‍👨‍👧‍👦 Our teams and their solutions

Team Insight

Team Insight developed a solution to bring supply chain transparency as a business case. 👉 Follow us here, on LinkedIn or Twitter to know when we publish a blogpost about this solution in more detail.

Team Insight — Track "Feeding the future"

Team Insured

Team Insured developed a solution to bring premium costs down by focusing on risk management and creating transparency in the liability of cargo. 👉 Follow us here, on LinkedIn or Twitter to know when we publish a blogpost about this solution in more detail.

Team Insured — Track "Future of cargo insurance"

🏆 The award goes to…

Both teams came up with killer solutions, but so did the competition. Team Insight made it to runner-up in their challenge "Feeding the future", while Team Insured didn't quite make it into the final 3. Admittedly, we were a bit annoyed after our win last year, but we'll be looking forward to the next challenge. Big thanks to the Odyssey organizers for hosting a great event and taking good care of us once again; the Odyssey hackathons are always a unique experience. Also kudos to TVM & Nutreco, our track sponsors, who gave us all the info needed to come up with our solutions; their teams of specialists stood by us to explain the challenges in their industry. Last but not least, we would like to thank Spire Maritime, who allowed us to use their API for the duration of the hackathon, and of course Brainjar for participating in the event with us.
Odyssey hackathon 2019 — Short recap was originally published in wearetheledger on Medium.
  3. About multi-value return statements

Photo by Ben Pattinson on Unsplash

Working with different programming languages allows you to compare certain features between them. Convergence between languages (and also frameworks) becomes noticeable: it seems new language and framework features are "borrowed" from other languages or frameworks, or are sometimes the result of taking the best of both worlds. Multi-value return statements are one such feature. This post focuses on this concept, compared between the languages Golang, C# and TypeScript.

What are multi-value return statements anyway?

Let's start with Golang. Golang is a (very) strict language, which forces you to write clean code. It also relies heavily on multi-value return functions, for example for explicit error handling.

```go
package main

import (
	"errors"
	"log"
)

func Hello(input string) (output string, err error) {
	if len(input) == 0 {
		return "", errors.New("Blanks not accepted")
	}
	return input + "!", nil
}

func main() {
	out, err := Hello("Josh")
	if err != nil {
		log.Fatal(err)
	}
	// continue with 'out' parameter
}
```

Golang forces you to use all variables you define, otherwise the code won't compile. (Did I mention Golang was strict?) This in turn forces you to really think about error handling up-front, instead of as an afterthought ("Oh right, exception handling…"). In cases where you don't need one of the return values, you can use the underscore to signal Golang that you are not interested in that output variable:

```go
_, err := Hello("Josh") // Just checking the error...
```

So why is this useful? Explicit exception handling is one good example, but it is specific to Golang. Another, more generic, benefit is that you don't need a 'wrapper' object or data structure whose sole purpose is delivering exactly one output parameter. Using the explicit Tuple data structure in C#, for example, is a thing of the past. Multi-value return statements weren't part of C# for a long time.
But since C# 7.0, we can write something like this:

```csharp
public (bool Valid, string Message) Validate(Model model)
{
    if (model.BuildingYear == null)
    {
        return (true, "Warning - No building year supplied.");
    }
    if (model.BuildingYear >= 1000 && model.BuildingYear < 10000)
    {
        return (true, null);
    }
    return (false, $"Error - BuildingYear {model.BuildingYear} is not in the correct format YYYY.");
}
```

Calling the method looks like this:

```csharp
var (isValid, message) = Validate(myModel);
// now you can use 'isValid' and 'message' independently.
```

Although C# code will still compile if you have defined unused variables, IDEs like JetBrains Rider will advise you to rename those unused variables to underscores, just like in Golang's syntax.

```csharp
// Rider IDE will advise you to rename 'message'
// to '_' if this parameter is not used.
var (isValid, _) = Validate(myModel);
```

What about TypeScript?

With TypeScript (and also JavaScript since ES6), you cannot return multiple values from a single method the way Go and C# do. Here, you mostly still use (nameless) JavaScript objects as 'wrappers' around the multiple values you want to return. However, since ES6, the same end result can be achieved with destructuring, which gives you syntactic sugar to easily access multiple values from a return statement in a single call. Here is a TypeScript example with the same logic as the C# example:

```typescript
private validate(model: Model): { valid: boolean, message: string } {
    if (model.buildingYear === undefined) {
        return { valid: true, message: 'Warning - No building year supplied.' }
    }
    if (model.buildingYear >= 1000 && model.buildingYear < 10000) {
        return { valid: true, message: null }
    }
    return { valid: false, message: `Error - BuildingYear ${model.buildingYear} is not in the correct format YYYY.` }
}
```

Although we are returning a 'wrapper' object instead of multiple values natively, by using destructuring we get direct access to the individual parameters inside the object.
```typescript
const { valid, message } = this.validate({ buildingYear: 1600 })
// You can now use 'valid' and 'message' independently.
```

Compare the statement above to calling the method in the C# example and you'll see the syntactic resemblance immediately.

Conclusion

Programming languages keep evolving, and in doing so, the best concepts sometimes get 'borrowed' from, or influenced by, other languages, which can only be a good thing! Thanks for reading.

About multi-value return statements was originally published in wearetheledger on Medium.
  4. CMC puts out a great product. Love the simplicity of the site.
  5. Overall Ripple is selling for a great price. I don't hold Ripple currently, but it's something I plan on buying in the future.
  6. Deploying test environments with Azure DevOps, EKS and ExternalDNS

The Problem

Our current software development lifecycle at work is straightforward: we have a development, a staging, and a production environment. We use feature branches and pull requests, where developers review each other's PRs before they get merged (and auto-deployed) into development. On development, the work gets tested by the test team, and once approved, it gets pull-requested and accepted into the staging environment, where business can test it as well before it goes to production. All is fine, except that when the test team rejects a certain feature, development ends up in a kind of blocked state, containing both features that have passed the test team and features that were rejected. The test team cannot decide that "features A and B can go to staging now, but feature C cannot", since all three features are already on a single branch (the dev branch). We could try to use something like git cherry-pick, but we would rather not start messing with git branches. Besides, the underlying problem is that the test team should be able to test these features independently of each other. A more ideal solution would be to have separate deployment environments for feature testing. And so the following idea emerged:

The Objective

For provisioning environments to deploy PRs to, different options exist. Whatever option is chosen, it is important to follow the concept of cattle, not pets: these environments should be easy to set up, and just as easy to break down or replace. We chose Kubernetes for this situation (although Terraform would also be a good fit). Since we are already using Azure DevOps (formerly known as Visual Studio Team Services, or VSTS), this platform will connect the dots and give us centralised control over the processes.
The plan can be summarised as follows:

Dockerize it

The first step is to dockerize your application components, so they can be easily deployed on a Kubernetes cluster. Let's take this straightforward tech stack as an example: an Angular front-end, a .NET Core back-end, and SQL Server as the database. Since PR environments should be cattle, even the database is dockerized. This results in completely independent environments, where the database can be thrown away after testing is done.

Dockerize the back-end component

Probably the easiest of the 3 components. We have a .NET Core back-end, for which we use a multi-stage dockerfile, so that the resulting image only contains the binaries needed to run:

```dockerfile
# First build step
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /app
# config here...
RUN dotnet publish -c Release -o deploy -r linux-x64

# Second build step
FROM microsoft/dotnet:2.1-runtime AS runtime
WORKDIR /app
COPY --from=build <path/to/deploy/folder> ./
ENTRYPOINT ["dotnet", "Api.dll"]
```

Dockerize the front-end component (SPA)

A little bit more difficult, since Single Page Applications, like Angular apps, are mostly hosted as static content. This means some variables have to be defined at build time, like an API host for example. See this link for more information on how to configure this with docker builds. Having to know these variables in advance imposes some challenges, as we will see below.

Dockerize SQL Server

SQL Server already has official docker images that you can use. However, we would also like to make sure that every time a new environment is set up, the database is pre-populated with a dataset of our choice, which allows for more efficient testing. To achieve this, we can extend the current (SQL Server) docker image with backups, and package the result as a new docker image! More details on how to achieve this can be found in this gist. Your dockerfile will look something like this:

```dockerfile
FROM microsoft/mssql-server-linux
COPY . /usr/src/app
ENTRYPOINT [ "/bin/bash", "/usr/src/app/docker-entrypoint.sh" ]
CMD [ "/opt/mssql/bin/sqlservr", "--accept-eula" ]
```

If you don't want any data pre-populated, you can use the official microsoft/mssql-server-linux image straight from DockerHub instead. To make sure all docker containers play nicely together, you can use a docker-compose file to wire them all up and check that everything works locally before trying things out in the cloud.

Create VSTS Build Pipelines

Once we have our docker images, we'll want to push them to a container registry, like Elastic Container Registry (ECR) for example. Of course, we don't want to push locally built docker images to ECR directly; we want an automated build tool to do this work for us instead! Lots of build tools exist today; here, we'll show how to do things with Azure DevOps / VSTS. In VSTS, you can implement your build processes in Build Pipelines and Release Pipelines. It's perfectly possible to put everything in a Build Pipeline without using the Release Pipeline, but this split-up gives you some benefits that we'll see later. For step 1 (building docker images) and step 2 (pushing images to ECR), we'll use a Build Pipeline. Below is an example of a setup for the UI docker image build pipeline. In VSTS, you have the option to choose between a 'click-and-drag' kind of build process setup, or a YAML-based (infrastructure-as-code) setup. For each docker image, we'll create a separate Build Pipeline, so we can exploit parallel build processes when necessary.

Click-and-drag kind of build setup

Great! After the build pipelines for all 3 components are configured, we can start configuring triggers for when these builds run.

Configure triggers for Build Pipelines

Azure DevOps allows you to program very specific triggers, actions and gates. To trigger these Build Pipelines automatically, we can set up Branch Policies on a specific branch.
In our case, on the development branch.

Example of configuring build triggers on specific branches in specific circumstances.

Packaging kubernetes yaml configuration files

The output of Build Pipelines can be turned into artifacts in Azure DevOps, and these artifacts can then be used as input for Release Pipelines in a next phase. Because we'll need the kubernetes yaml configuration files during the Release phase, we'll need another Build Pipeline which packages these files as an artifact. This Build Pipeline will look something like this:

Build pipeline config for packaging kubernetes yaml files

Create a VSTS Release Pipeline

Release Pipelines are used as a next phase: they use the output produced by our Build Pipelines as input. Of course, the output of our docker build pipelines lives on ECR, not on Azure DevOps; the kubernetes yaml files are the only input used by the Release phase. The kubernetes cluster itself will pull the images straight from ECR when needed. (This is easier said than done: EKS, AWS's managed kubernetes solution, uses its own authorization mechanism, which does not play nicely with kubernetes' own auth mechanism. The solution consists of deploying a cronjob which pulls new secrets once in a while, so that your cluster can successfully authorize with ECR. This blogpost describes the solution in more detail.)

Overview of a Release Pipeline

In a Release Pipeline, you can set up your release strategy with components called 'Stages'. Inside these stages, you can define a number of jobs and tasks, just like in a Build Pipeline. Take note of the names of the 'Stages' in this Release Pipeline: pre-dev-stage-1 and pre-dev-stage-2. These names can be dynamically retrieved in the tasks through parameters; the stage name, for example, can be retrieved by using #{Release.EnvironmentName}# in expressions.
We’ll use these values in 2 situations: As namespaces within our kubernetes cluster As part of a dynamic domain name Apply kubernetes yaml file for specific namespace It was this blogpost that helped me define setup everything in VSTS with kubernetes. By using the Release.EnvironmentName -parameter as namespace , you’re able to deploy complete new environments for each Stage you define. In our case for pre-dev-stage-1 and pre-dev-stage-2 . In this scenario, we’ll expose our 3 services via LoadBalancers. (Exposing the database here is not necessary, but helpful if we want to be able to directly connect a local client to the database for test-purposes). $ kubectl get svc --namespace=pre-dev-stage-1 NAME TYPE CLUSTER-IP EXTERNAL-IP sql-server-01 LoadBalancer xxx.elb.amazonaws.com api LoadBalancer yyy.elb.amazonaws.com ui LoadBalancer zzz.elb.amazonaws.com Let’s look at what we have here: Each of these services has there own external-ip address which is great. However, remember from before that the UI is build as static sources, which are being hosted from within a container. We have no way to know upfront what the External-IP of the API service will be, which we will actually need upfront during docker build(because AWS will give these loadbalancers random names). One way of solving this problem is using predefined domain names, so the UI can be build with such a predefined domain name. However, this gives us a new problem: Every time the ExternalIP changes, we need to modify DNS again and again to connect the ExternalIP of the Loadbalancer to the predefined domain we have chosen. Luckily, this problem can be solved thanks to ExternalDNS. ExternalDNS and Cloudflare to the rescue ExternalDNS is a tool that can be deployed within your kubernetes cluster. You can configure this service so it has direct access to your own DNS provider. 
In my case, I used Cloudflare, but this can be any DNS provider that supports ExternalDNS (see the list of supported DNS providers on GitHub). At regular intervals, it scans your current config for specific annotations which tell the ExternalDNS service to update the DNS provider with the hostname provided in the annotation. For example, take a look at the following yaml configuration:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: myapp-api
  name: myapp-api
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myapp-api-#{Release.EnvironmentName}#.example.com
    external-dns.alpha.kubernetes.io/ttl: "300" # optional
spec:
  type: LoadBalancer
  ports:
    - name: "80"
      port: 80
      targetPort: 2626
  selector:
    app: myapp-api
status:
  loadBalancer: {}
---
```

By adding these extra annotations to my existing service, my external-dns service will be triggered to update my DNS (in this case Cloudflare) to match the correct LoadBalancer. Great! Fully automated! And yes, it will also clean up your DNS entries afterwards when these services are removed from the cluster.

DNS entries automatically populated by ExternalDNS

Important note: DNS updates can be quite slow, so depending on a range of factors, this could take a while to propagate… or not.

Conclusion

With this setup, we can deploy test environments manually or semi-automatically from within Azure DevOps! Choose in which environment you want to deploy certain builds, and add new stages when desired!

Thanks for reading. Cheers.

Deploying test environments with Azure DevOps, EKS and ExternalDNS was originally published in wearetheledger on Medium.
  7. A collaboration for clinical trials on the blockchain between Boehringer Ingelheim and TheLedger

The process of discovering a potentially new drug until it becomes available to patients takes 15–30 years and costs, on average, 1.3 billion euros. Boehringer wants to speed up and optimise part of this chain: clinical studies, because 80% of clinical trials are delayed. By working on digitisation and focusing on the patient, Boehringer is convinced that this process can become more efficient and therefore faster. The patient owns his own data, enabling him to submit and confirm additions and changes himself. This increases transparency: the patient knows at any time which data is available and which of the parties involved can view it. Digitisation increases efficiency because information flows to all parties more smoothly and reliably. When there is important new information concerning the safety of the patient, it can be received, confirmed and signed directly via a smartphone or tablet, for example. Blockchain technology increases the reliability and integrity of the data. This results in faster and better recruitment. Not only does this have enormous advantages for the patient, but hospitals can also jump on board the digital transformation: by keeping hospital and anonymised patient profiles on the blockchain, it is easier to find the right hospitals for the right study in an instant, and thus to recruit the right patients and get the medicine to the patient more quickly.

Current process: paper

Before trials can be executed on humans, they need to run through a complex and intensive process, which we will not discuss here. We'll start at the point where the trial is verified and accepted for execution by all involved parties.
Boehringer sends out the requirements to conduct the study to several hospitals, after a CDA (Confidential Disclosure Agreement) is signed by those hospitals: what equipment they need, how many doctors, how many patients, and so on. When a hospital fulfils all the requirements, it can start patient recruitment. After the recruitment, the patients fit to participate in the study are selected. These patients need to sign a paper Informed Consent (IC), which explains the whole process, risks, schedules, etc. It needs to be signed together with a doctor, so patients can ask any questions they have. This Signed Informed Consent (SIC), which is on paper, is kept at the hospital. Now it gets interesting: every time something changes about the study, the patient needs to go to the hospital and sign the new informed consent again. Some studies are conducted over several years, with visits every X months. If a patient forgets to go to the hospital to sign the new informed consent and a visit is near, e.g. to draw some blood, it cannot be taken. And when the study doctor is not present, the new informed consent cannot be signed, so the visit needs to be rescheduled, delaying the trial.

Our proof-of-concept solution: hybrid

Patients are anonymised on the blockchain. Only the hospital may link an anonymous patient on the blockchain to a real patient, based on the paper signed informed consent that is stored in the hospital's own private database.

A hybrid solution for the PoC

Visibility

There are 3 parties who can view the patients' data: Boehringer as the pharma company, the hospitals, and the patient. Boehringer can see every anonymous patient's data and actions on the blockchain (but not the signed informed consent). The hospital and the patient can see both the anonymised data from the blockchain and the signed informed consent.

How does it work?

When a patient is ready to register anonymously on the blockchain, he gets a number. This number is written on the paper informed consent that needs to be signed.
This way the anonymous patient's ID is linked with the identity written down on the informed consent. When registering, the document is uploaded and saved in the private off-chain database of the hospital.

Data integrity of off-chain documents

When the patient is registered, a hash of the document is calculated. This hash, together with the path and version of the document, is saved with the anonymous patient on the blockchain. At the first login, the patient signs this uploaded document digitally, so future digital signatures can be compared with this initial one and the patient can digitally check that the uploaded document is the one he signed on paper.

When the patient requests his data, the smart contract goes to the path, gets the document, calculates the hash and compares it with the hash saved on the blockchain. If there is a mismatch, the user gets an error, the system knows the uploaded document has been tampered with, and measures can be taken.

Updating through digital signatures

As mentioned before, the initial document is signed on paper, uploaded and signed digitally. When there is an update of the study and a new informed consent needs to be signed, the patient is notified. He can read the new document from home and sign it digitally in an instant. If he has questions, he can call the doctor.

A new informed consent needs to be signed by a minimum of two parties: patient and doctor. When the patient has signed the new document digitally, the doctor can sign it when the patient visits. The doctor can only sign after the patient has signed the document, due to rules in the smart contract.

Architecture of Proof-of-Concept

Note that the backend service consists of three different services: an API gateway, a chain service to communicate with the blockchain, and a document service for the documents uploaded and stored in an Amazon S3 bucket. I hope you enjoyed this article.
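As a footnote for the technically inclined: the off-chain integrity check described above boils down to recomputing the document's hash and comparing it with the one stored on-chain. A minimal sketch in Python (function names are ours for illustration, not taken from the actual PoC code):

```python
import hashlib

def file_hash(path: str) -> str:
    """Compute the SHA-256 digest of a document on disk, in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_document(path: str, stored_hash: str) -> bool:
    """Compare the recomputed hash with the one saved on the blockchain.

    A mismatch means the off-chain document was tampered with.
    """
    return file_hash(path) == stored_hash
```

In the PoC this comparison is done by the smart contract; here it is shown as plain Python for readability.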
If you got excited about blockchain and want to know how this technology can transform and add value to your business, just contact us @ TheLedger. PharmaChain: Proof-of-Concept was originally published in wearetheledger on Medium.
Real-time connected container tracking at the chainport hackathon

Connecting IoT, AI, and Blockchain through a collaboration between Innovation Unplugged, Craftworkz and TheLedger

That's right, we participated in a hackathon again! First of all a shout-out to Junction and all involved parties and sponsors for organizing this amazing hackathon. A shout-out as well to Innovation Unplugged and Craftworkz for collaborating with us, developing this amazing solution and taking on this wonderful adventure with us!

The #chainporthack was a splendid edition with 220 participants in Antwerp and, at the same time, 220 participants in LA, tackling the same challenges listed in the image below.

(Image: the challenges)

We mainly focussed on the 'Interport' and 'Safety & security' challenges, but implemented some features of gamification, sustainability, and process & document flow as well. As for other hackathons, we started from scratch, and after 48 hours our solution was hosted on AWS, live and ready to use! The only thing that was made beforehand was a 3D-printed container. Yeah, I just said 3D-printed, pretty cool, isn't it? So let's get on with it and showcase what we've built.

Interport: ETA(I)

The main issue of the interport challenge was the Estimated Time of Arrival (ETA). Late shipments cost the companies and ports a lot of money. If they could predict the arrival time more accurately, they could use resources better and more efficiently. We started thinking: "How can we make it smart and correct?", looking at the current situation and the currently available solutions. And of course, we instantly thought of Artificial Intelligence (AI). Leveraging its power to predict the ETA was a genius idea. That's why we called it ETAI!

How does ETAI work?

Every container has a route to follow. Let's say from LA to Montreal to Barcelona to Antwerp by boat, and from Antwerp to Hamburg by truck.
Knowing its predefined route and the arrival time at each port, a first estimation can be made of whether the shipment will arrive at the destination on time. No AI needed for this. But then it gets interesting. Data has shown that arriving late at one of the previous destinations is not a good indicator for the prediction of the ETA.

(Image: board and props used for the demonstration at the hackathon)

Here is where we implemented the AI. The algorithm checks the nautical conditions to predict the new ETA, because wind speed, wind direction and sea state (high waves, etc.) have a huge impact on a boat sailing those kinds of distances. E.g. a captain will decrease speed when there are high waves at sea. Taking these parameters into account between all the stops, the AI predicts whether the container will still arrive on time, even when it is late at one or multiple of the stopovers.

Safety & security

Actually, this is the coolest part we did, in my opinion. Inside the 3D-printed container there were some IoT devices, measuring tilting, temperature, humidity, water and the eSeal. So when the container got tilted, a red tilting alert was shown instantly on the dashboard and the action was stored on the blockchain.

(Image: real-time container information)

OK, cool. Tilting, getting a red alert on the dashboard, nothing fancy really. But now comes the most interesting part. When the eSeal is broken and the doors of the container are opened, a picture is taken of the thief and shown on the dashboard, and an alarm inside the container is triggered! An extension would be to interpret this face with AI and match it against databases of criminals from Interpol, port security and others.

(Image: real-time eSeal opening)

Sustainability

For the IoT device(s) plugged into the container, we need a power source. So we started thinking again (we did a lot of thinking during that weekend). When looking around, the watch of Jonas Snellinckx came to my mind.
He has an automatic (self-winding) watch: a mechanical watch in which the natural motion of the wearer provides the energy to run it, making manual winding unnecessary. A container is always in motion, due to the motion of the ocean and even when being driven on a truck. So what if we covered the floor of the container with special tiles that convert the kinetic energy of the 'bouncing' into electrical energy? The excess energy is stored in a battery in case there is no motion. The most amazing part of this solution is that we are talking about 100% self-generated energy, and that such tiles already exist!

Gamification with self-sovereign identity

We wanted to incentivise the dockworkers for quick document handling. When completing the tasks that need to be done on the container, they would receive some kind of points. Having enough points for doing a good and fast job, they could exchange them for extra vacation or other benefits. But we turned 180 degrees and added self-sovereign identity through uPort. A captain can claim a badge when he has completed a perfect shipment. With a lot of these badges, captains can unambiguously claim and prove that they have done a lot of perfect deliveries.

Document & process flow

We can predict a more accurate ETA and can check the conditions of the container throughout the whole route. I already hear you thinking: "Aren't there any penalties related to some conditions?". And the answer is YES. These conditions and actions are written in smart contracts and stored on the (Hyperledger Fabric) blockchain. E.g. when a certain temperature exceeds the maximum limit, the smart contract is triggered and penalties are added. At the end of the trip, all the penalties are summed up and can be viewed transaction by transaction through the history that is saved on the blockchain. What has that to do with document & process flow?
The rules of these smart contracts are defined in a paper contract, which is linked to the container and can be viewed in the same application as well. This way there can be no dispute about the penalties added.

Simon says raise your leg as high as you can!

At every port, a container has to go through a process, some checks you might say. We implemented that as well. At every port, three new actions (Simon says) were generated. Again, when not all actions were completed, penalties were added. A real example of a "Simon says" could be: "Move container to place X".

(Image: real-time data dashboard)

Reach out to us @ TheLedger. If you are interested in AI, reach out to Craftworkz. For all things IoT, please contact Innovation Unplugged.

Real-time connected container tracking at the chainport hackathon was originally published in wearetheledger on Medium.
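To make the penalty mechanism from this post concrete, here is a minimal plain-Python sketch of the temperature rule. The real logic would live in Hyperledger Fabric chaincode; the threshold and penalty values are invented for illustration:

```python
def temperature_penalties(readings, max_temp, penalty_per_breach=50):
    """Sum a fixed penalty for every sensor reading above the contractual limit.

    A plain-Python sketch of the rule the smart contract would enforce;
    in the hackathon solution this logic lives on the blockchain, so the
    penalty history can be audited transaction by transaction.
    """
    breaches = [t for t in readings if t > max_temp]
    return len(breaches) * penalty_per_breach
```

For example, two readings above an 8 °C limit at 50 per breach would add up to a 100-unit penalty at the end of the trip.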
Eliminating Paper — Contract Management

(Image: The Village Lawyer (1621) — Pieter Brueghel the Younger)

Technology is continuously improving legal processes. It is easing the burden of administration, improving communication, and making the work of lawyers more efficient. This allows lawyers to focus on valuable and creative ways to serve their clients while minimising the time spent on slow and costly routine tasks.

The Contract Management space is a crowded and turbulent one. Technology is being implemented in existing workflows to help with data ingestion and management. When we take a closer look, we recognise three main categories trending in the space.

Legaltech companies are now providing access to legal agreements without the necessity of hiring a lawyer. The legal documents include copyright registrations, wills, intellectual property, and other standard contracts. This increased accessibility and affordability of legal services is a major step in the innovation of the industry.

Document Assembly services offer DIY legal forms and templates to build contracts. The category also encompasses the design of workflows, allowing consumers to tailor a legal document to their personal needs.

Lastly, Artificial Intelligence is being utilised to review contracts. The software searches for risks and conflicts with the company's predefined legal policies. It is trained to understand legal concepts by being fed hundreds of thousands of documents. This drastically cuts down the costs of the Contract Approval process and acts as a first-pass review.

The New Kid on the Blockchain

With the emergence of Blockchain and Smart Contract Technology, a new player has entered the game. It enables the realisation of a new concept: Smart Legal Contracts. These are digitised agreements, with parameters readable by both human and machine. This opens the door for automation of transactions, live visibility of contract state and other contract management processes like signatures and renegotiation.
Read more about it in our previous publication.

High-Growth Use Cases

The transition from traditional, manual legal processes to a connected and automated environment creates a plethora of new commercial use cases. I will now cover some specific use cases that benefit from the implementation of this new technology. Covenant will serve as an application layer for Contract Management. The Smart Legal Contracts are built on Accord Project, a techno-legal foundation that provides an open-source technology stack for developing these digitised contracts.

🏠 Real Estate

Buildings and other assets are being filled with sensors, creating an abundance of new data. Using this data as a source for Smart Legal Contracts, a wider range of activities and processes can be automated with lower risk, lower cost and enhanced transparency.

Ownnr is a unique digital platform for property management. Their solution aims to save time and money spent on managing real estate. They offer direct communication with tenants, real-time visibility of payments, property information, repairs, and other valuable insights. Users are also able to assemble electronic contracts and ultimately sign them with a Digital Signature, provided by Connective and itsme.

Integrating Covenant into this flow will valorise the contract by adding machine-readable parameters to the legal prose. In the image below, this is represented by the Smart Clause. The contract will listen for a predefined trigger, in this case the signatures of the two parties. Once confirmed, the contract will automatically execute a predefined output. This can be a notification or a transaction.

To achieve a truly self-aware contract, the document can be connected to multiple data sources. ERPs, Blockchains and IoT Networks provide valuable, trusted information that enables contracts to show real-time status and react to key events that happen in the real world.
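The trigger-and-execute flow described above can be sketched as a toy model. The class and method names are ours for illustration; a real Accord Project clause is written in its own template and logic languages, not in Python:

```python
class SmartClause:
    """Toy model of a clause that fires its output once every required
    party has signed (the trigger), and never fires twice."""

    def __init__(self, required_signers, on_execute):
        self.pending = set(required_signers)  # parties who still need to sign
        self.on_execute = on_execute          # predefined output, e.g. a notification
        self.executed = False

    def sign(self, party):
        """Record a signature; execute the output once all signatures are in."""
        self.pending.discard(party)
        if not self.pending and not self.executed:
            self.executed = True
            self.on_execute()
```

With a tenant and a landlord as required signers, nothing happens after the first signature; the second one triggers the predefined output exactly once.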
For example, local authorities often aim to influence economic processes through the provision of grants or subsidies. The Ownnr platform can be connected to a government API, screening for subsidies and automatically claiming them for the end users.

🏗️ Equipment Leasing

Devices are upgrading at an increasing rate. Because of this, people and businesses are less incentivised to invest in new equipment. They then proceed to invest in less technologically advanced devices, lowering their general productivity. Utilising performance-based financing or a pay-per-use financing plan, more consumers can enjoy the benefits of using high-end devices and machinery. This comfort gives small and medium-sized businesses a chance at competing with the larger players in the industry.

The Blockchain serves as an immutable record of the machine's history. This data encompasses product usage, maintenance information, ownership, and lifecycle efficiency. The analysis of this data proves to be incredibly valuable to various stakeholders, including Original Equipment Manufacturers, consumers, and financers. In this scenario, Smart Legal Contracts can function as reusable, scalable and automated assets that handle the payments, micropayments and insurance of the leased equipment.

💡 Intellectual Property

The Internet has disrupted traditional media and has given birth to new channels, enabling consumers to create and distribute original content. Platforms like YouTube, Instagram and Medium are gaining popularity, allowing users to start a career as Content Creator. Simultaneously, complex legal challenges arise. It is necessary to protect the Intellectual Property of these professionals, shielding them from copyright infringement lawsuits. Blockchain and Smart Legal Contracts offer a transparent and reliable way to claim ownership. The content can be stored on the blockchain, accompanied by a timestamp, licenses and other properties.
This awards the producer with full control over the distribution and monetisation of his assets.

About Me

I am a 22-year-old Student-Entrepreneur with an expertise in Web Technology and Digital Product Development. My interests include innovation, design, psychology, philosophy and financial markets. I am currently doing a three-month internship at the amazing Blockchain Consultancy Startup TheLedger. I am always open for a good conversation: Contact me.

Read More

TheLedger
Accord Project
LawGeex
LegalZoom

Eliminating Paper — Contract Management was originally published in wearetheledger on Medium.
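As an aside, the subsidy-screening idea from the Real Estate use case above could look roughly like this. The data shapes and the `submit` callback are invented for illustration; no actual government API is referenced:

```python
def eligible_subsidies(property_profile, subsidies):
    """Filter the open subsidies whose criteria the property satisfies."""
    return [s for s in subsidies
            if all(property_profile.get(k) == v
                   for k, v in s["criteria"].items())]

def claim_all(property_profile, subsidies, submit):
    """Submit a claim for every matching subsidy and return the claim ids.

    `submit` stands in for the call to a government API; in Covenant's
    vision this step would be executed automatically by the contract.
    """
    return [submit(s["id"], property_profile["owner"])
            for s in eligible_subsidies(property_profile, subsidies)]
```

The point of the sketch is the automation: screening and claiming happen without the end user ever reading a subsidy regulation.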
Building Castles in the Air — Smart Legal Contracts

"If you have built castles in the air, your work need not be lost; that is where they should be. Now put the foundations under them." — Henry David Thoreau, Walden

🔮 The Contracts of Tomorrow

Contracts are the foundation of the economy. This economy is adopting technology at an exponential rate. Connections between people, businesses and devices are growing rapidly and becoming real-time. Meanwhile, contracts are disconnected from all this information. They are static paper (or PDF) documents that are out of sync with real-time events and are slow and costly to manage.

However, with the emergence of Blockchain and Smart Contract technology, the concept of a 'Smart Legal Contract' suddenly isn't that far-fetched. These are human-readable contracts with machine-readable parameters, with clauses that can be executed by computer logic. These self-aware contracts can then be connected to a variety of data sources, like IoT networks, ERPs, Blockchains and other Shared Ledgers. This opens the door for automation of transactions, live visibility of contract state and other contract management processes like signatures and renegotiation.

Does this sound like a fugazi, a whazy or a woozie to you? Let's break it down.

(Image: Matthew McConaughey, The Wolf of Wall Street)

🤖 The Technicals

Legal Contracts

In order to innovate traditional agreements, one must first understand the definition of a contract. The Draft Common Frame of Reference (DCFR) is a draft for the codification of European contract law and related fields of law, and it states the following:

Article II. 1:101: Meaning of Contract and Juridical Act. A contract is an agreement which gives rise to, or is intended to give rise to, a binding legal relationship or which has, or is intended to have, some other legal effect.

In general, a contract can be concluded if the following points are met: At least two parties are involved — natural persons or legal persons.
There is an intent of the parties to have legal consequences. A sufficient agreement is reached.

Next, let us compare its properties with those of the novel technology: Smart Contracts.

Blockchain and Smart Contracts

A Smart Contract is a computer program that allows for credible transactions without third parties. These transactions are trackable on the blockchain and have an immutable character. It is, in essence, a piece of code designed to perform predefined actions when certain conditions are met. The irony of the situation is that it is neither smart nor a legally binding contract. The protocol is subject to the quality of the written code: it can contain bugs and is irreversible. It is, however, able to self-execute transactions, which is why it is a core building block of a Smart Legal Contract.

Smart Legal Contracts

Smart Legal Contracts are legal agreements that can be read and executed by a machine. Such a contract contains the legal prose of a lawyer and a digital signature, and the final document is hashed. The Accord Project's open-source technology enables the creation of legally binding contracts that become 'self-aware' through connection to external data sources and other information systems. This enables automation of transactions and other contract management processes. They provide open-source software tools that form a triangle of model-logic-language functionality. The combination of these three elements allows contract templates to be edited, analysed, queried and executed.

My internship project carries the name Covenant. I aim to research the functionality of Smart Legal Contracts and ultimately find a fitting use case for it in a modern business process. Covenant is an application layer for Contract Management of Smart Legal Contracts. The three crucial elements of this platform are Digital Identity, Digital Signatures & Smart Legal Contract Templates. It is necessary to integrate a Digital Identity system that is supported by governing institutions.
In Belgium, itsme is a popular Federated Digital Identity Provider, built by a consortium of leading banks and mobile operators. In the future, this service can be replaced by the decentralised variant: Self-Sovereign Identity.

The contracts will be built with Smart Clauses provided by Accord Project. This model is supported by an increasing number of leading law firms. Utilising this framework will aid the acceleration and adoption of the technology. Lastly, writing identifying information to a Public Ledger could provide more integrity and data assurance. The Ethereum Blockchain is a prime example of this type of ledger. Because contracts contain discrete information, the content of the document can be stored separately on a private ledger.

(Image: a glance at the High-Level Architecture of Covenant — it is subject to change over time)

The next publication of this series will cover specific use cases that benefit from this new technology. Additionally, it will contain an analysis of the challenges and pitfalls that arise with them.

🕵️ About Me

I am a 22-year-old Student-Entrepreneur with an expertise in Web Technology and Digital Product Development. My interests include innovation, design, psychology, philosophy and financial markets. I am currently doing a three-month internship at the amazing Blockchain Consultancy Startup TheLedger. I am always open for a good conversation: Contact me.

TheLedger is part of a larger group called Cronos Groep. It is an ecosystem of businesses and a framework which helps entrepreneurs build out their business. They provide services like fleet and HR, so startups like us only have to focus on our core business. Cronos Groep has holdings in more than 370 companies in various sectors and is actively involved in the start-up of some 20 companies per year. Within this group, we belong to a smaller group called IBIZZ. IBIZZ stands for IBM, Open source and Innovation.
This is where our love for Open source innovative technology comes from. We're agnostic, but we also have some IBM in our veins.

🗺️ Explore More

TheLedger
Clause
Accord Project
Agreements Network
Ricardian Contracts

Building Castles in the Air — Smart Legal Contracts was originally published in wearetheledger on Medium.
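To make the "human-readable prose with machine-readable parameters" idea from this post concrete, here is a minimal sketch. The clause text and parameter names are invented; real Accord Project templates use their own template and logic languages (CiceroMark and Ergo), not Python:

```python
from string import Template

# Hypothetical clause: the prose a human reads, with named parameters
# that a machine can also fill in and evaluate.
CLAUSE = Template(
    "The Lessee shall pay ${amount} EUR to the Lessor "
    "no later than day ${due_day} of each month."
)

def render_clause(params):
    """Fill the machine-readable parameters into the human-readable prose."""
    return CLAUSE.substitute(params)

def payment_late(day_paid, params):
    """Evaluate a clause condition from the same parameters."""
    return day_paid > params["due_day"]
```

The same parameter set drives both the text a lawyer reads and the condition a program evaluates, which is the core of a Smart Legal Contract.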
How blockchain could support road asset management

Within Flanders, there is a multitude of road managers and dozens of other parties who rely for their daily operations on accurate information about roads, road amenities, and electro-mechanical and telematic installations: road assets, in short. There is a need for information about the ownership and management of the assets, but also about specific properties of the assets. However, the multitude of parties, the necessary exchanges and the absence of one overall responsible party make the situation very complex. In this blog post, we try to explain how blockchain technology could help alleviate these problems. This initial investigation was done by TheLedger in close collaboration with Agentschap Wegen en Verkeer (AWV), and in assignment of Agentschap Informatie Vlaanderen.

Current road asset management

There are a number of road managers within Flanders, each responsible for a set of roads and their associated assets. These road managers include the road and traffic agency (AWV), several public-private partnerships, and the different cities and municipalities. Other parties, such as utility companies, public transport companies, and water road managers, are impacted by the status of the road assets as well. They typically also have their own assets, which might be impacted in case of changes to the roads or the assets.

(Image: many parties are involved in the management of roads and road assets or are impacted by them)

For the various road assets, it is important to know who the owner, the manager, and the possible contractor are, because maintenance is required for these assets in order to ensure smooth traffic and to guarantee safety. Additionally, the correct party must take responsibility if a problem arises or in case an accident occurs due to poor maintenance. Due to the multitude of parties owning road assets, however, there often is a lack of clarity about who is the manager or owner. Each party has its own overview of assets and thus its own view on the truth.
There is thus a need for a single version of the truth regarding the ownership of the road assets.

Furthermore, a lot of information about the assets is exchanged between the various road managers, but also with contractors and utility companies that carry out road works and maintain their assets. However, information threatens to leak away. For example, it happens that contractors have a lot of information about the assets, but that this information does not reach the managers or owners. Inefficient data exchange can lead to extra costs due to product failure, wrong deliveries or the human intervention that is needed to obtain the information. There is a need for an easy way to share asset information.

In order to achieve this, several measures are already being taken. First of all, the use of BIM standards will facilitate the exchange of information, since this way everybody speaks the same "language". The AWV also focuses on the controlled delivery of data via their application DAVIE (Data Acceptance, Validation and Information Extraction), via which contractors submit data to the agency. An additional aspect that could help is to actually share the road asset data, as opposed to throwing it over the wall, as is the case in some situations.

Blockchain-based road asset management

Blockchain is a technology that offers transparency and trust, and that allows data and logic — for example as an interpretation of agreements made — to be shared directly between different parties. These aspects of blockchain could help road managers to exchange information about road assets across different stakeholders and to determine who is the owner or manager of an asset. As such, this "road asset blockchain" would form a shared platform between the various road authorities, which provides an unambiguous version of the truth and ensures smooth and transparent information exchange.
For this solution, we assume that a standard for road asset information has already been agreed upon. Within this "road asset blockchain", the different road authorities together form the blockchain consortium, and each consortium partner shares its asset information on the network. This way, we can easily obtain a single version of the truth regarding the ownership of road assets. Road authorities thus each register their assets, and with the help of geospatial queries, it is possible to detect and remove assets that are registered twice.

With such a system it is also easier to exchange other asset information. All information is already in the network, and ownership is simply transferred. Furthermore, logic could enforce, for example, that every asset must have an owner and a manager, that a certain agreed-upon asset standard is followed, or that maintenance must be done at a specific frequency. The transfer of ownership can also be handled using the shared logic. Additionally, logic could determine who may see which data. Keep in mind, however, that each party participating in the blockchain network has a copy of the data.

Blockchain as a piece of the puzzle

The consortium members own and manage the "road asset blockchain" together, and together they decide on the data and the rules in the network. The blockchain should, however, be seen as only a piece of the puzzle: it forms the common layer where road asset data, logic and functionalities are shared. This common layer can be addressed by each of the consortium parties from their own systems, whereby an interface or front-end is provided for the users. This interface can be a new application or an existing application that is extended with the new functionalities. The users can be internal employees as well as external parties, such as contractors with whom the road manager cooperates.
It is possible that the data is first enriched with additional information from a party's own systems, or that additional rules are applied within those own systems. The consortium parties are also free to make certain functionalities available to users or not, and they can do this in different ways: either via an API or via an application with screens. In principle, the consortium can also jointly build an application — as a kind of central party — that can, for example, be used by citizens to report defects of road assets. A notification can then automatically be sent to the owner or manager. Note that in such a system, no time would be wasted looking for the responsible party.

Advantages of a blockchain solution

A "road asset blockchain" has certain advantages. It forms a single version of the truth, without the need for a third party or central administrator. After all, there is currently no central authority that could take up this role, nor is there currently a distributed system which the different parties could join. The proposed system enables the various parties to share the data and logic as equals and also stimulates the reuse of data. The distributed nature of blockchain also ensures that the blockchain consortium can continue to exist in the long term and could survive institutional changes.

Blockchain offers transparency and traceability. This could strengthen confidence in contractors. In addition, the timestamp of the information becomes increasingly important. Finally, a blockchain network also allows automation between different companies by means of its smart contracts, and imposes rules on the network.

A typical alternative to a blockchain solution is to create a central database where all data (in this case concerning all road assets) is gathered and managed by a central authority. The big question here is who should fulfill this role, and to what extent this party is trusted with the data.
As stated above, the decentralized and transparent character of blockchain also has certain advantages.

Next steps

With this blog post, we explained how blockchain could support road asset management, creating a shared single version of the truth on asset ownership across the different road managers and smoothing information exchange. This post was the result of a preliminary investigation into the possibilities and advantages of using blockchain for road asset management. Next steps will be discussed internally. The further elaboration, development and roll-out of a solution is, however, a joint task for all partners and stakeholders.

How blockchain could support road asset management was originally published in wearetheledger on Medium.
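As a closing aside, the duplicate detection via geospatial queries mentioned in this post can be sketched with a simple distance check. The tolerance and data shapes are invented for illustration; a real solution would query an indexed geospatial store rather than compare all pairs:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6_371_000 * asin(sqrt(a))

def possible_duplicates(assets, tolerance_m=5.0):
    """Pairs of same-type assets registered within a few metres of each
    other: candidates for double registration by two road authorities."""
    hits = []
    for i, a in enumerate(assets):
        for b in assets[i + 1:]:
            if (a["type"] == b["type"]
                    and haversine_m(a["lat"], a["lon"],
                                    b["lat"], b["lon"]) <= tolerance_m):
                hits.append((a["id"], b["id"]))
    return hits
```

Flagged pairs would then be reviewed and one of the two registrations removed, leaving a single version of the truth per asset.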
Student & labor mobility are well established within Europe. For example, in 2015 about 1.6 million students were undertaking tertiary-level studies in a foreign country across the EU-28 [1]. When doing this, students need to present their diploma so that the admission requirements can be checked. Likewise, employees have to show their diplomas to prove their qualifications when applying for a job.

No European authentic diploma database

In this era of increased student & labor mobility, we should be able to easily obtain a cross-country overview of one's diplomas and to check the authenticity of these diplomas. Keep in mind also that diploma fraud is still a current issue, and in many cases it even goes unnoticed. However, a European central authentic diploma database does not exist. Even within Belgium, there is no central solution. In Flanders, the LED is used as the authentic source, but there is no equivalent in Wallonia. In the end, the schools are contacted to confirm the authenticity of a diploma, which means a lot of administrative work.

Furthermore, when going abroad, your diploma also has to be recognized, which can be a cumbersome process. Now, imagine for a second that you are a refugee, you have lost your paper diploma and your school doesn't exist anymore. Imagine the administrative processes you face at that point [2]. And let's admit it: even if you never go abroad for work or studies, managing your degree is an annoying process. You have to keep the actual, inconveniently sized paper diploma, scan it and show it when needed (if you still know where it is at that moment).

Let it be clear that there is a need for an easy way to authenticate diplomas and to exchange them across borders. Regional ad hoc solutions exist, but these are difficult to scale, and each has its limitations.

Certified for life solution

In assignment of Informatie Vlaanderen, AHOVOKS and la Fédération Wallonie-Bruxelles, TheLedger worked out a blockchain solution for this.
We prototyped a decentralized system which different governments and schools can join, and which enables their students to obtain their complete and authentic diploma profile and share it internationally.

Blockchain

"Blockchain solution", you said? Blockchain is an append-only distributed database with shared business logic on top. This technology enables different parties to share data immediately, without the need for a central administrator. With permissioned or enterprise blockchains, the data is only shared with certain parties, so that not just anybody can see and access it. As such, a blockchain forms a decentralized system, maintained by a consortium of partners. The consortium decides on the data and rules in the system, and on which new partners can join. Of course, such a solution will only work if the consortium has a shared incentive. When the data is shared among all relevant stakeholders, this system can form a single source of truth. All transactions are immutably logged on the blockchain, offering built-in auditability and traceability. Together, these aspects ensure data integrity. Furthermore, a blockchain allows sharing not only data but also business rules. This means that we can enforce certain rules across the network and enable inter-company automation.

Blockchain consortium

The idea behind the certified for life project is that the different governments together form a blockchain consortium. Together, they will be able to provide civilians with a cross-country, authentic diploma profile. In countries where the government does not interfere with education, or in the case of private schools, the schools themselves could become part of the blockchain consortium. This idea is visualized in Figure 1.
Figure 1: A blockchain consortium is a group of parties with a common incentive to share data and business logic.

Blockchain functionality

So different governments can together maintain a diploma blockchain, but which functionalities did we include? First, governments and schools should be able to add diplomas to the blockchain. Governments will likely use batch processes to upload the data they already have in their regionally centralized system (in case such a system exists). Schools can then add missing diploma data manually (on demand), for example in the case of older diplomas. Civilians should be able to consult their diplomas and manage the visibility of individual diplomas. To share their diploma profile, civilians can use access keys or access links. Different access links with different time frames can be created and distributed to different people. A third-party user can then consult the profile using such an access link. When accessing the data, this anonymous user has to provide a reason, since he is accessing personal data. The fact that the access link was used is reported to the individual concerned, together with the reason provided by the anonymous user, as shown in Figure 2. This way, we bring ownership back to the individual.

Figure 2: User interface for a civilian. Here, the civilian can see a timeline with all actions that happened to his diploma profile, e.g. diplomas which were added or access links which were created, but also the fact that an access link was actually used.

In the built prototype, schools have an overview of their students (i.e. of the diplomas obtained at that school), but they can also add possible future students and consult their profiles, if the student gave them permission (an access key). Furthermore, because different governments might use different diploma standards, we followed the "bring-your-own-standard" principle.
Of course, at a later stage, other functionalities could be added. Here are some examples:

- Include diploma supplements & transcripts of records. These documents provide additional valuable information, which is for example also needed for admission checks at schools.
- Include mappings (or even groupings) between diploma standards, which could mean a great deal for the automation of diploma recognition and equivalence.
- Link the diploma profiles to the processes of study program registration and admission, allowing for further automation and acceleration of administration.
- Consider mergers and discontinuations of schools for reasons of governance.
- Include other learning institutions and certificates, in order to build complete resumes for individuals.
- Include accreditors and sworn translators.
- …

Identification of civilians

There is an aspect we have not discussed yet, but which is of utmost importance for the diploma case. Governments and schools assign diplomas to individuals, and therefore they have to uniquely identify those individuals. However, each government knows its civilians by a unique identifier which is only known within that country. There is no such thing as a European national number. You could have a national number in more than one country (e.g. if you study abroad or have a double nationality), but no records are kept of their linking. Unfortunately, even the combination of your name, date of birth and place of birth is not unique. How do we create a cross-country overview of someone's diplomas if we do not have a unique way to identify people across countries? A solution is to let civilians link their different national identifiers in the system, whereby they have to sign this transaction digitally with both identifiers. We followed this idea for the prototype. A prerequisite is that civilians can digitally prove that a certain national identifier is theirs.
Digitally proving ownership of a national identifier is in fact not that straightforward, since foreign students, for example, don't usually get a national ID card in the foreign country. This problem could be alleviated by the eIDAS project, which will allow foreigners to identify themselves on governmental sites using their own national ID, which would then be translated to an identification number in that country.

Blockchain architecture & set-up

So different governments can share diploma data and functionality using the certified for life blockchain solution. However, what is often forgotten when thinking about blockchain is that blockchain is a backend component: it forms a shared database with shared functionality, but it does not contain a front-end, for example. Furthermore, it is up to each consortium partner to integrate its existing IT systems with the blockchain. As such, blockchain is only a piece of the puzzle. Already existing regionally centralized diploma databases, such as the LED, can be synchronized with the blockchain. The consortium partners can also create user interfaces for other stakeholders, e.g. a front-end for civilians and an API for schools. Figure 3 visualizes this architectural set-up. Of course, we did not implement the complete picture for the prototype, but enough to make the concepts and ideas clear. The set-up for our prototype is shown in Figure 4. The development was done on Hyperledger Fabric (HLF).

Figure 3: TO BE architecture of the enterprise blockchain solution
Figure 4: Architectural set-up for the prototype

Advantages of the proposed solution

The proposed solution provides a cross-country diploma overview for civilians and creates a platform where the authenticity of diploma data can be checked. As explained above, ownership and ease of use for the civilian are central aspects of the solution (see also Figure 5): he can decide who can see his profile by creating access links or access keys, and he gets insight into all actions done on his profile.
Figure 5: The certified for life solution brings ownership and ease of use to civilians.

The decentralized nature of the system implies that no single central administrator needs to be appointed, and that the system could outlive the lifespan of individual institutions. It is up to the consortium to decide whether new partners (i.e. schools or governments) can join this decentralized system. This cross-country system allows sharing not only data but also business rules: the way the data is handled is enforced and controlled by the network. By design, all transactions (i.e. diploma additions or even modifications) are immutable, traceable and auditable. Therefore, the integrity and authenticity of the diplomas are ensured. Furthermore, the solution can be integrated with governments' and schools' current IT systems, allowing for easy ways to update and receive student information. The certified for life blockchain solution can be seen as a piece of the puzzle that helps smoothen and simplify the administrative processes of governments and schools (see also Figure 6).

Figure 6: The certified for life solution can be easily integrated with schools' and governments' IT systems and will smoothen and simplify their administrative processes.

Lessons learned & alternative architectures

In the set-up for the prototype, we exchange all diploma information via the blockchain, meaning that the diploma data, the person's national number and the permissions (access keys) are all stored on the blockchain. With this set-up, we obtain the most advantages and ease of use for the civilians, and we can leverage the shared business rules on the blockchain the most. However, concerns are raised in terms of the "right to be forgotten", the increased risk of data breaches (due to duplication of personal data across different consortium partners) and the potential misuse of the data by individual consortium partners.
Indeed, even if the data is encrypted, if the chaincode logic must be able to decrypt the information, then the consortium partners themselves are in principle able to decrypt and read the data. Solutions exist, but they limit the advantages of the solution as well. For example, we could work out a solution where only a proof of the diploma is kept on the blockchain, in the form of a hash, while the permissions and the encrypted diploma data are kept in the "side databases" of Hyperledger Fabric version 1.2. Here, the diploma data could be symmetrically encrypted with a key, which is itself asymmetrically encrypted with the public key of each individual user that gets access. In these side databases of HLF v1.2, private transactions can be kept, and data can be deleted. In this solution, the private key of the user that got permission to view the diploma data is needed to decrypt the symmetric key and thus the diploma. Therefore, malicious consortium partners are not able to decrypt the diploma data. However, we would no longer be able to work with access links as proposed in the prototype, and we won't be able to leverage the shared logic as much (such as including mappings between diplomas, as discussed above). A second option is the idea of "self-sovereign" identity. Here, the proof of the diploma, again in the form of a hash, would be kept on the blockchain, and a digital version of the diploma would be given to the civilians. This way, the individual is truly the owner of his own personal data, and only proofs are kept on-chain. Be aware, however, that the student would then get the digital diploma at graduation, for example, and would have to keep the data himself on his devices or in his cloud storage. To conclude, there is also a non-technical mitigation for the risk of malicious consortium partners: putting the necessary legal agreements in place with these partners.
Keep in mind as well that it is the existing consortium that decides on new partners, so the best option might be to only involve parties in which you have relative trust.

Next steps

With this prototype, we came to a solution for the authentication and cross-country exchange of diploma data using blockchain technology. The prototype shows that the immutability and access control ensure data integrity and proof of authenticity, and that the built-in traceability increases insight, trust & control for the individual. The next steps are to involve the necessary stakeholders at the European level and to start creating a consortium. The consortium should come to an agreement on which set-up to follow and can then take this set-up to production together.

Special thanks

Besides our clients, Fédération Wallonie-Bruxelles, Informatie Vlaanderen and AHOVOKS, we would like to send out a special thanks to our stakeholders at the different schools and educational institutions that participated in this project. We received a lot of valuable input from them during the project, and their enthusiastic collaboration and constructive feedback ensured a workable prototype and important steps towards making this a successful European project.

Where to go next
→ Who is TheLedger and what do we do besides writing blogs and doing hackathons
→ From Analog to Blockchain
→ Bring your own standard

Certified for Life — International exchange & authentication of diplomas via blockchain was originally published in wearetheledger on Medium, where people are continuing the conversation by highlighting and responding to this story.
  13. Source: Hyperledger Fabric 1.2 Docs

The latest v1.2 release of Hyperledger Fabric introduces private data stored in SideDBs. This feature offers you the possibility to build "GDPR compliant" blockchain solutions. The following article will show you how to use these private DBs. If you are not yet familiar with the concept of private data on Hyperledger Fabric, check out the previous article written by Jonas Snellinckx.

Note: This is a technical article. We will make use of our Hyperledger Fabric network boilerplate and our chaincode utils on Github.

Private data use case

To demonstrate the use of private collections, we'll use the classic car example. The initLedger function creates ten new cars in our collection. All these cars can be accessed and viewed by anyone in the network. Let's create a private collection we only want to share with one other garage we own.

Collection configuration

To get started, we first need a collection configuration file collections_config.json which includes the collection name and policy. The policy is similar to an endorsement policy, which allows us to reuse the already existing policy logic, such as the OR and AND operators.

Writing chaincode

Our Hyperledger boilerplate already contains functions to create and query private data. For adding data to a private collection (carCollection), we just need to specify which collection we want to add the data to. Next, for querying a car, we have to specify the private collection we want to query. For deleting and updating objects, you do the exact same.

Chaincode best practices

It will certainly occur that part of your data is stored on-chain, visible to anyone in the network.
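A minimal sketch of this public/private split, using a hypothetical MockStub in place of the real fabric-shim stub (the real ChaincodeStub exposes the same putPrivateData/getPrivateData calls). The public car data goes to the world state; the price stays in carCollection, stored under the same key.

```javascript
// MockStub is an in-memory stand-in for the Fabric ChaincodeStub,
// for illustration only: it lets the pattern run outside a Fabric network.
class MockStub {
  constructor() { this.public = new Map(); this.private = new Map(); }
  putState(key, value) { this.public.set(key, value); }
  getState(key) { return this.public.get(key); }
  putPrivateData(collection, key, value) {
    if (!this.private.has(collection)) this.private.set(collection, new Map());
    this.private.get(collection).set(key, value);
  }
  getPrivateData(collection, key) {
    return (this.private.get(collection) || new Map()).get(key);
  }
}

// Public fields go to the world state; the price goes to the private
// collection, under the same key for easy retrieval later on.
function createCar(stub, car) {
  const { price, ...publicData } = car;
  stub.putState(car.id, JSON.stringify(publicData));
  stub.putPrivateData('carCollection', car.id, JSON.stringify({ price }));
}

function queryCarPrice(stub, id) {
  return JSON.parse(stub.getPrivateData('carCollection', id)).price;
}

const stub = new MockStub();
createCar(stub, { id: 'CAR0', make: 'Toyota', model: 'Prius', price: 20000 });
console.log(queryCarPrice(stub, 'CAR0'));               // 20000
console.log(JSON.parse(stub.getState('CAR0')).price);   // undefined
```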
However, some of the data is private and will be stored in the private collection, accessible only by the peers defined in the collection config. We suggest using the same key for storing the object in the public and the private collection, which makes it easier to retrieve the data later on.

Tip: Our stubHelper implements a generateUUID that creates a deterministic ID for storing an object.

Interested in starting your own blockchain project, but don't know how? Do you need help starting your token sale or having one audited? Get in touch with TheLedger.

Conclusion

We demonstrated here how easy it is to write chaincode using our package. You don't have to, but it speeds things up quite a bit. These examples can help you get started; if you want to read our documentation, you can check out our Github. We would also encourage you to contribute and make this package better. 👌

Where to go next
→ Curated list of Hyperledger Fabric resources
→ Our Hyperledger Fabric REST server Typescript boilerplate
→ Network boilerplate including Nodejs chaincode example
→ Node Chaincode Utils on Github

A Beginner's Guide to SideDBs and Private Data for Hyperledger Fabric Nodejs Chaincode was originally published in wearetheledger on Medium, where people are continuing the conversation by highlighting and responding to this story.
  14. Hyperledger Fabric v1.2 (just released) introduces private data. This feature can fix a bunch of confidentiality issues present in the technology today, one of which is GDPR compliance. The following article requires some general Fabric knowledge. If you are not yet familiar with Hyperledger Fabric, I suggest you watch this video series or the introduction in the official documentation.

Quick note: I'm not a GDPR expert, and the way you implement this technology is not guaranteed to be GDPR compliant. But private data enables GDPR compliant blockchain applications.

What's that? SideDBs? Private data on a blockchain?

The current way of implementing confidentiality is by using channels. However, it is discouraged to create a lot of channels in a large network just to achieve confidentiality. Creating channels for every pair of transacting parties brings a lot of overhead: managing policies, chaincode versioning and Membership Service Providers. All the data would have to be either public or private, and if you wanted to transfer an asset to a party outside a channel, it would be a burden. This is where private transactions come in. Private data allows you to create collections of data, using policies to define which parties in the channel can access the data. This access can simply be managed by adding policies to the collections. This allows some data to be public and some to be private for certain parties.

Current issue

Imagine the marbles example. You would like to record which marbles belong to whom. All marble data can be public except for the owner and the price: these cannot be visible to anyone, for privacy reasons, and prices should not be made public for future transactions. Maybe you need to track this data because you need to validate whether the person selling the marble is the actual owner. A (fictional) marble auditing firm will be a partner in this to detect fraud.
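Collection access policies reuse Fabric's endorsement-policy syntax, combining MSP principals with OR and AND. A toy evaluator for simple, non-nested policies shows the idea; it is illustrative only, since the real evaluation is done by Fabric itself against the channel's MSP configuration.

```javascript
// Toy evaluator for flat policies such as
// "OR('FarmerMSP.member', 'StoreMSP.member')". For illustration only.
function evaluatePolicy(policy, mspId) {
  const match = policy.match(/^(OR|AND)\((.*)\)$/);
  if (!match) throw new Error('unsupported policy: ' + policy);
  const [, op, args] = match;
  // Strip quotes and whitespace around each principal.
  const principals = args.split(',').map(p => p.trim().replace(/^'|'$/g, ''));
  // A principal like "FarmerMSP.member" matches any member of FarmerMSP.
  const hits = principals.map(p => p.split('.')[0] === mspId);
  return op === 'OR' ? hits.some(Boolean) : hits.every(Boolean);
}

console.log(evaluatePolicy("OR('FarmerMSP.member', 'StoreMSP.member')", 'FarmerMSP')); // true
console.log(evaluatePolicy("OR('FarmerMSP.member', 'StoreMSP.member')", 'BankMSP'));   // false
```

In the marbles scenario, this is how a separate collection per marble seller and marble auditor pair would restrict who can even hold the private data.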
If you're not using channels in 1.1, everything you do will be recorded to the state of the ledger. This is not GDPR compliant. How does private data solve this?

Image 1: From the slide deck "Privacy Enabled Ledger", https://jira.hyperledger.org/browse/FAB-1151

The first set, "Channel Read-Write Sets", is what the current architecture looks like: every transaction is recorded in the state and history. The second set shows a shared private state between two peers, each in a separate organization. This state is replicated across these peers according to policies. The third set shows the true power of private transactions: collections can be omitted from some members. This means you can set up separate private collections for each marble seller and marble auditor pair. These collections allow some data to be shared privately, while the main data is still stored in the main state and ledger.

Image 2: Private state, https://hyperledger-fabric.readthedocs.io/en/release-1.2/private-data/private-data.html

Authorized peers will see the hash of the data on the main ledger, and the actual data in the private database. Unauthorized peers will not have the private database synced and will only be able to see the hash on the ledger. Since hashes are irreversible, they will not be able to see the actual data. At a high level, the marbles issue resolved using private data looks like this.

Image 3: The marbles issue made GDPR compliant

How does this apply to GDPR?

My colleague Andries wrote a clear article about the problems with GDPR and blockchain. I'll describe the problem here in short, but if you want to read the full article, please go here.

The problem

Data which has been added to the ledger cannot be deleted. So when adding personal data, this is an issue for GDPR: one cannot simply delete blocks. One solution which is used frequently is storing data off-chain, as shown in the image below.
But this solution is rather complex, because you manually have to look up the validity of the data as well as the links to the data on the blockchain.

Private data as a solution

Private data is basically the solution above, built into Fabric itself without the extra work. It solves multiple issues with GDPR.

Limitation of data

You shouldn't have access to data you're not using. Private data solves this issue by controlling access using policies similar to endorsement policies. By using this policy logic already present in Fabric, we can use the OR, AND, … operators to define which parties have access.

collections_config.json:

```json
[
  {
    "name": "collectionFarmer-Store",
    "policy": "OR('FarmerMSP.member', 'StoreMSP.member')",
    "requiredPeerCount": 0,
    "maxPeerCount": 3,
    "blockToLive": 1000000
  }
]
```

Limitation of usage

You should only keep your data as long as you need it. For collections, you can specify a blockToLive in the policy, which does exactly what it sounds like: you define how long a collection should be kept in terms of blocks. Old data in the private database will automatically be purged after that number of blocks, so you do not have to worry about keeping unused data. In the configuration above, blockToLive is set to 1000000 blocks. The hashes in the actual blocks will not be removed.

Right to be forgotten

This is the same as the previous item, but items can also be removed manually. Since nothing is written to the ledger except for the hash, after this procedure the item will not exist anywhere.

Caveats

This solution is only GDPR compliant when:

1. Parties are not malicious. If they have bad intentions, they can just copy and share this data with external parties. This is a general issue and not specific to blockchain technology. This is where the rules in your consortium come in.
You need to have clear rules with clear consequences defined to make sure nodes do not turn malicious.

2. It's implemented correctly. As mentioned at the top of this article, it's only GDPR compliant if you implement it correctly. You have to be cautious about what you place on the public ledger, what you place on the private one, and how long you will keep this data.

It's not bulletproof just yet

Your chaincode will be replicated across all peers, and so will the other configuration files. This means the collections_config.json will also be replicated to all peers, in order for the system to properly set up and know about these private collections. As a result, every member can see who's doing business or sharing secret data with whom. They can't see the actual data, but disclosing the participants' information is still a confidentiality issue. This issue should be addressed in 1.3.

Collections have to be defined up front

Currently, private collections have to be defined up front. This is hard to maintain when there is a large number of different party-to-party transactions, but it's usable. Version 1.3 will introduce implicit collections, which are basically collections that can be made on the fly and even passed on to other members.

Where to next?
⇢ Private data tutorial
⇢ Private data docs
⇢ Curated list of Hyperledger Fabric resources

Private data, a possible built-in "GDPR compliant" solution for Hyperledger Fabric was originally published in wearetheledger on Medium, where people are continuing the conversation by highlighting and responding to this story.
  15. and writing blogs about doing hackathons. 😲

Who are we

We're a blockchain consultancy start-up from Belgium (& The Netherlands). We're currently nine strong, with eight people located in Belgium and one in The Netherlands. We are part of a larger group called Cronos Groep, a network/ecosystem of businesses and a framework which helps entrepreneurs build out their business. They provide services like fleet and HR, so start-ups like us only have to focus on their core business. Cronos Groep has holdings in more than 370 companies in various sectors and is actively involved in the start-up of some 20 companies per year. Within this group, we belong to a smaller group called IBIZZ, which stands for IBM, Open source and Innovation. This is where our love for open-source, innovative technology comes from. We're agnostic, but this is why we also have some IBM in our veins. 💪

What we do

We're consultants at our core. We provide our blockchain expertise to other companies. This expertise ranges from Hyperledger Fabric, Ethereum, BigchainDB and Stellar to Hyperledger Sawtooth, IOTA, …, and from analysis to development. There are so many cool distributed ledger technologies out there, which is why we try to spread our knowledge and try to be technology agnostic. Besides regular consultancy, we also felt the need to provide help creating and auditing token sales. We also give blockchain awareness sessions and workshops to help the companies we work for to further grasp the potential of blockchain in their business. 🎒

What we've done

You can always find an updated list on our website https://theledger.be/projects. But we'll include some links here as well.

Hackathons
Hack for Diamonds 2018 — Winners of the blockchain challenge 💎
DiaVest — Winning solution for the hackfordiamonds hackathon
Blockchaingers hackathon 2018 — Winning the "Digital nations infrastructure" track
What we've built to win the world's biggest blockchain hackathon of 2018!
Some of our projects

Please get in touch if you're looking for a technology partner yourself.

Greencards — B-Hive
By placing insurance (green) cards on the blockchain, we are able to digitally grant access in a controlled manner while speeding up the manual process of requesting access.
Project Digital insurance cards - TheLedger - Blockchain projects

Competencies on the blockchain — GO & VDAB
Providing a Bring-Your-Own-Standard platform to map competencies in different standards from different companies, ultimately helping people without the correct diplomas to get a job based on achieved competencies.
Bring-Your-Own-Standard
Project Lifelong learning - TheLedger - Blockchain projects

KYC on the blockchain — B-Hive
Providing a universal platform to validate KYC identities.
Project KYC Identity - TheLedger - Blockchain projects

Smart contract audit — Inwage
Ethereum token sale contract audit.
Moria Token

Where to go next
Our website: https://theledger.be
AI & prototyping as a service: https://craftworkz.co
Robotic process automation: http://roborana.be
Devops powers: http://flowfactor.be
Cronos groep: https://cronos-groep.be
IBIZZ: https://ibizz.be

Who is TheLedger and what do we do besides writing blogs and doing hackathons was originally published in wearetheledger on Medium, where people are continuing the conversation by highlighting and responding to this story.
