HFSS: machine learning applied to a patch#

This example shows how you can use PyAEDT to create a machine learning algorithm in three steps:

  1. Generate the database.

  2. Create the machine learning algorithm.

  3. Implement the model in a PyAEDT method.

While this example supplies the code for all three steps in one Python file, it would be better to separate the code for each step into its own Python file.

Perform required imports#

Perform required imports.

import json
import os
import random
from math import sqrt

import joblib
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

from pyaedt import Hfss
from pyaedt.modeler.advanced_cad.stackup_3d import Stackup3D

Set non-graphical mode#

Set non-graphical mode. The "PYAEDT_NON_GRAPHICAL" environment variable is read only so that this documentation can be generated automatically. You can set non_graphical directly to either True or False.

non_graphical = os.getenv("PYAEDT_NON_GRAPHICAL", "False").lower() in ("true", "1", "t")
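The truthy-string check above can be exercised in isolation. This small helper (the name env_flag is invented here for illustration) mirrors the same logic:

```python
def env_flag(value):
    """Mirror the truthy-string check used for "PYAEDT_NON_GRAPHICAL"."""
    return value.lower() in ("true", "1", "t")


# Accepted truthy spellings return True; anything else returns False.
print([env_flag(v) for v in ("True", "1", "t", "no", "0")])  # → [True, True, True, False, False]
```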

Generate database#

This section describes the first step, which is for generating the database.

Generate input#

Randomly generate the four inputs: frequency, substrate permittivity, substrate thickness, and patch width. Frequency ranges from 0.1 GHz to 1 GHz, and permittivity from 1 to 12.

The following code generates a database of 1 frequency x 2 permittivities x 2 thicknesses x 2 widths. It creates eight cases, which are far too few to train the model but sufficient to test it. Later in this example, you import more than 3,300 cases from a previously generated database of 74 frequencies x 5 permittivities x 3 thicknesses x 3 widths.

Thickness is generated from 0.0025 to 0.055 of the free-space wavelength. Width is generated from 0.5 to 1.5 of the optimal theoretical width:

c / (2 * frequency * sqrt((permittivity + 1) / 2))

For each frequency-permittivity couple, two random thicknesses and two random widths are generated. The patch length is calculated using the analytic formula. Using this formula is important because it narrows the frequency sweep needed for the data recovery. Each case is stored as a dictionary in a list.
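The code that generates the random (frequency, permittivity) couples is not included in this file. A minimal sketch of what it could look like, with the names frequency_list and tuple_random_frequency_permittivity matching the loop below and the values chosen only for illustration:

```python
import random

# Sketch only: the original generation code is not included in this file.
# One frequency and two random permittivity values give the 2 couples that,
# combined with 2 thicknesses and 2 widths, produce the eight cases described above.
frequency_list = [150e6]  # Hz; the full example spans 0.1 GHz to 1 GHz
tuple_random_frequency_permittivity = []
for frequency in frequency_list:
    for _ in range(2):
        permittivity = 1 + 11 * random.random()  # from 1 to 12
        tuple_random_frequency_permittivity.append((frequency, permittivity))

print(len(tuple_random_frequency_permittivity))  # → 2
```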

dictionary_list = []
c = 2.99792458e8
for couple in tuple_random_frequency_permittivity:
    list_thickness = []
    list_width = []
    frequency = couple[0]
    permittivity = couple[1]
    er = permittivity
    wave_length_0 = c / frequency

    min_thickness = 0.0025 * wave_length_0
    inter_thickness = 0.01 * wave_length_0
    max_thickness = 0.055 * wave_length_0
    for i in range(2):
        random_int = random.randint(0, 1)
        if random_int == 0:
            thickness = min_thickness + (inter_thickness - min_thickness) * random.random()
        else:
            thickness = inter_thickness + (max_thickness - inter_thickness) * random.random()
        list_thickness.append(thickness)

    min_width = 0.5 * c / (2 * frequency * sqrt((er + 1) / 2))
    max_width = 1.5 * c / (2 * frequency * sqrt((er + 1) / 2))
    for i in range(2):
        width = min_width + (max_width - min_width) * random.random()
        list_width.append(width)

    for width in list_width:
        for thickness in list_thickness:
            effective_permittivity = (er + 1) / 2 + (er - 1) / (2 * sqrt(1 + 10 * thickness / width))
            er_e = effective_permittivity
            w_h = width / thickness
            added_length = 0.412 * thickness * (er_e + 0.3) * (w_h + 0.264) / ((er_e - 0.258) * (w_h + 0.813))
            wave_length = c / (frequency * sqrt(er_e))
            length = wave_length / 2 - 2 * added_length
            dictionary = {
                "frequency": frequency,
                "permittivity": permittivity,
                "thickness": thickness,
                "width": width,
                "length": length,
                "previous_impedance": 0,
            }
            dictionary_list.append(dictionary)

print("List of data: " + str(dictionary_list))
print("Its length is: " + str(len(dictionary_list)))
List of data: [{'frequency': 150000000.0, 'permittivity': 11.12, 'thickness': 0.022366146081558284, 'width': 0.5851828511576299, 'length': 0.2913597446832381, 'previous_impedance': 0}, {'frequency': 150000000.0, 'permittivity': 11.12, 'thickness': 0.062449036425084745, 'width': 0.5851828511576299, 'length': 0.27127457226904245, 'previous_impedance': 0}, {'frequency': 150000000.0, 'permittivity': 11.12, 'thickness': 0.022366146081558284, 'width': 0.5038138849180984, 'length': 0.29281440890976346, 'previous_impedance': 0}, {'frequency': 150000000.0, 'permittivity': 11.12, 'thickness': 0.062449036425084745, 'width': 0.5038138849180984, 'length': 0.27401591278143184, 'previous_impedance': 0}, {'frequency': 150000000.0, 'permittivity': 4.63, 'thickness': 0.020372751853711634, 'width': 0.8713557129679793, 'length': 0.45498788163790604, 'previous_impedance': 0}, {'frequency': 150000000.0, 'permittivity': 4.63, 'thickness': 0.056129076483691476, 'width': 0.8713557129679793, 'length': 0.4348780524291964, 'previous_impedance': 0}, {'frequency': 150000000.0, 'permittivity': 4.63, 'thickness': 0.020372751853711634, 'width': 0.5843386393406469, 'length': 0.4588971100524726, 'previous_impedance': 0}, {'frequency': 150000000.0, 'permittivity': 4.63, 'thickness': 0.056129076483691476, 'width': 0.5843386393406469, 'length': 0.4424897039189111, 'previous_impedance': 0}]
Its length is: 8
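The analytic length calculation in the loop above can be factored into a helper. As a sanity check, it reproduces the length of the first entry of the printed database (the function name analytic_patch_length is introduced here for illustration):

```python
from math import sqrt

C0 = 2.99792458e8  # speed of light in vacuum, m/s


def analytic_patch_length(frequency, permittivity, thickness, width):
    """Half-wavelength patch length with the added-length correction above."""
    er_e = (permittivity + 1) / 2 + (permittivity - 1) / (2 * sqrt(1 + 10 * thickness / width))
    w_h = width / thickness
    added_length = 0.412 * thickness * (er_e + 0.3) * (w_h + 0.264) / ((er_e - 0.258) * (w_h + 0.813))
    wave_length = C0 / (frequency * sqrt(er_e))
    return wave_length / 2 - 2 * added_length


# First case of the printed database above
length = analytic_patch_length(150e6, 11.12, 0.022366146081558284, 0.5851828511576299)
print(round(length, 6))  # → 0.29136
```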

Generate HFSS design#

Generate the HFSS design using the Stackup3D class. Open an HFSS design, create the stackup, add the different layers, and add the patch. In the stackup library, most components, like the layers and patch, are already parametrized.

desktopVersion = "2022.2"

hfss = Hfss(
    new_desktop_session=True, solution_type="Terminal", non_graphical=non_graphical, specified_version=desktopVersion
)

stackup = Stackup3D(hfss)
ground = stackup.add_ground_layer("ground", material="copper", thickness=0.035, fill_material="air")
dielectric = stackup.add_dielectric_layer("dielectric", thickness=10, material="Duroid (tm)")
signal = stackup.add_signal_layer("signal", material="copper", thickness=0.035, fill_material="air")
patch = signal.add_patch(patch_length=1009.86, patch_width=1185.9, patch_name="Patch", frequency=100e6)

Resize layers around patch#

Resize the layers around the patch so that they change when the patch changes.

stackup.resize_around_element(patch)
True

Create lumped port#

Create a lumped port that is parametrized as a function of the patch dimensions.

patch.create_lumped_port(reference_layer=ground, opposite_side=False, port_name="one")
<pyaedt.modules.Boundary.BoundaryObject object at 0x0000028B83542340>

Create line#

Create a line that is parametrized as a function of the patch length. This ensures that the air box is large enough in the direction normal to the patch.

points_list = [
    [patch.position_x.name, patch.position_y.name, signal.elevation.name],
    [patch.position_x.name, patch.position_y.name, signal.elevation.name + " + " + patch.length.name],
]
hfss.modeler.primitives.create_polyline(position_list=points_list, name="adjust_airbox")
pad_percent = [50, 50, 300, 50, 50, 10]
region = hfss.modeler.primitives.create_region(pad_percent)
hfss.assign_radiation_boundary_to_objects(region)
<pyaedt.modules.Boundary.BoundaryObject object at 0x0000028B828F6670>

Plot#

Plot the patch.

hfss.plot(show=False, export_path=os.path.join(hfss.working_directory, "Image.jpg"), plot_air_objects=True)
Machine learning applied to Patch
<pyaedt.generic.plot.ModelPlotter object at 0x0000028BFA6C4BE0>

Create setup and sweep#

Create a setup and a sweep by frequency.

print(len(dictionary_list))
# frequency_list holds the unique frequencies generated earlier
# (a single 150 MHz entry in this small database).
for freq in frequency_list:
    frequency_name = str(int(freq * 1e-6))
    setup_name = "Setup_" + str(frequency_name)
    current_setup = hfss.create_setup(setupname=setup_name)
    current_setup.props["Frequency"] = str(freq) + "Hz"
    current_setup.props["MaximumPasses"] = 30
    current_setup.props["MinimumConvergedPasses"] = 2
    current_setup.props["MaxDeltaS"] = 0.05
    current_setup.update()
    current_setup["SaveAnyFields"] = False

    freq_start = freq * 0.75
    freq_stop = freq * 1.25
    sweep_name = "Sweep_of_" + setup_name
    hfss.create_linear_count_sweep(
        setupname=setup_name,
        unit="Hz",
        freqstart=freq_start,
        freqstop=freq_stop,
        num_of_freq_points=25000,
        sweepname="Sweep_of_" + setup_name,
        save_fields=False,
        sweep_type="Interpolating",
    )
8

Define function#

Define a function to recover the index of the resonance frequency.

def index_of_resonance(imaginary_list, real_list):
    list_of_index = []
    for i in range(1, len(imaginary_list)):
        if imaginary_list[i] * imaginary_list[i - 1] < 0:
            if abs(imaginary_list[i]) < abs(imaginary_list[i - 1]):
                list_of_index.append(i)
            elif abs(imaginary_list[i]) > abs(imaginary_list[i - 1]):
                list_of_index.append(i - 1)
    if len(list_of_index) == 0:
        return 0
    elif len(list_of_index) == 1:
        return list_of_index[0]
    else:
        storage = 0
        resonance_index = 0
        for index in list_of_index:
            if storage < real_list[index]:
                storage = real_list[index]
                resonance_index = index
        return resonance_index
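A quick synthetic check of this function (the definition is repeated in the snippet so it runs standalone; the sample values are invented): the imaginary part changes sign between samples 2 and 3, closer to sample 3, so index 3 is returned.

```python
def index_of_resonance(imaginary_list, real_list):
    # Same definition as above, repeated so this snippet is self-contained.
    list_of_index = []
    for i in range(1, len(imaginary_list)):
        if imaginary_list[i] * imaginary_list[i - 1] < 0:
            if abs(imaginary_list[i]) < abs(imaginary_list[i - 1]):
                list_of_index.append(i)
            elif abs(imaginary_list[i]) > abs(imaginary_list[i - 1]):
                list_of_index.append(i - 1)
    if len(list_of_index) == 0:
        return 0
    if len(list_of_index) == 1:
        return list_of_index[0]
    storage = 0
    resonance_index = 0
    for index in list_of_index:
        if storage < real_list[index]:
            storage = real_list[index]
            resonance_index = index
    return resonance_index


imag = [5.0, 3.0, 1.0, -0.5, -2.0]  # one zero crossing, nearer to index 3
real = [10.0, 20.0, 80.0, 120.0, 60.0]
print(index_of_resonance(imag, real))  # → 3
```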

Create parametric variation by case#

Use a loop to create a parametric variation for each case and associate it with a setup. Each parametric variation is composed of the patch length and width and the substrate permittivity and thickness. For each variation, measure the real resonance frequency so that each (length, width, permittivity, thickness) data point corresponds to a measured resonance frequency. Use an error counter to verify that the resonance frequency is contained in the sweep. To keep the sweep narrow, the length of each case was calculated with the analytic formula.

error_counter = []
for i in range(len(dictionary_list)):
    dictio = dictionary_list[i]
    frequency_name = str(int(dictio["frequency"] * 1e-6))
    setup_name = "Setup_" + str(frequency_name)
    sweep_name = "Sweep_of_" + setup_name
    length_variation = dictio["length"] * 1e3
    width_variation = dictio["width"] * 1e3
    thickness_variation = dictio["thickness"] * 1e3
    permittivity_variation = dictio["permittivity"]
    param_name = "para_" + setup_name + "_" + str(i)
    this_param = hfss.parametrics.add(
        patch.length.name,
        length_variation,
        length_variation,
        step=1,
        variation_type="LinearCount",
        solution=setup_name,
        parametricname=param_name,
    )
    this_param.add_variation(
        patch.width.name, width_variation, width_variation, step=1, unit=None, variation_type="LinearCount"
    )
    this_param.add_variation(
        dielectric.thickness.name,
        thickness_variation,
        thickness_variation,
        step=1,
        unit=None,
        variation_type="LinearCount",
    )
    this_param.add_variation(
        "$cloned_Duroid__tm__permittivity",
        permittivity_variation,
        permittivity_variation,
        step=1,
        unit=None,
        variation_type="LinearCount",
    )
    hfss.analyze_setup(param_name, num_cores=4, num_tasks=None)
    data = hfss.post.get_solution_data(
        "Zt(one_T1, one_T1)",
        setup_sweep_name=setup_name + " : " + sweep_name,
        domain="Sweep",
        variations={
            patch.length.name: [str(length_variation) + "mm"],
            patch.width.name: [str(width_variation) + "mm"],
            dielectric.thickness.name: [str(thickness_variation) + "mm"],
            "$cloned_Duroid__tm__permittivity": [str(permittivity_variation)],
        },
        polyline_points=25000,
    )
    imaginary_part = data.data_imag()
    real_part = data.data_real()
    corresponding_index = index_of_resonance(imaginary_part, real_part)
    if corresponding_index == 0:
        hfss.logger.error("The resonance is out of the range")
        error_counter.append(i)
    minimum_imaginary = imaginary_part[corresponding_index]
    previous_impedance = real_part[corresponding_index]
    print("minimum_imaginary: " + str(minimum_imaginary))
    print("previous_impedance: " + str(previous_impedance))
    frequency_list = data.primary_sweep_values
    resonance_frequency = frequency_list[corresponding_index]
    print(resonance_frequency)
    dictio["frequency"] = resonance_frequency
    dictio["previous_impedance"] = previous_impedance
minimum_imaginary: -0.5391088298547048
previous_impedance: 242.59032744437778
0.13818102724109
minimum_imaginary: -0.06854448385593492
previous_impedance: 229.75576601480975
0.134133865354614
minimum_imaginary: -0.05070035747854201
previous_impedance: 275.51495582328255
0.137034981399256
minimum_imaginary: 0.02944450100351842
previous_impedance: 248.76697951207484
0.13495589823592902
minimum_imaginary: -0.14333557325782484
previous_impedance: 175.3070972002142
0.137148985959438
minimum_imaginary: 0.03763208868078106
previous_impedance: 171.1749673577154
0.13925207008280302
minimum_imaginary: 0.29876612264590785
previous_impedance: 257.8155105958846
0.144958298331933
minimum_imaginary: 0.1106682330033443
previous_impedance: 239.77821376583464
0.13941407656306298

End data recovery step#

End the data recovery step by dumping the dictionary list into a JSON file. Saving the data allows you to use it in another Python script.

json_file_path = os.path.join(hfss.working_directory, "ml_data_for_test.json")
with open(json_file_path, "w") as outfile:
    json.dump(dictionary_list, outfile)
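The dump can be round-tripped. A minimal self-contained check, using a temporary directory and a single hypothetical case:

```python
import json
import os
import tempfile

# Hypothetical single-case database, just for the round-trip check
cases = [{"frequency": 150e6, "permittivity": 11.12, "length": 0.2914, "previous_impedance": 0}]

with tempfile.TemporaryDirectory() as tmp_dir:
    path = os.path.join(tmp_dir, "ml_data_for_test.json")
    with open(path, "w") as outfile:
        json.dump(cases, outfile)
    with open(path) as infile:
        reloaded = json.load(infile)

print(reloaded == cases)  # → True
```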

Create machine learning algorithm#

This section describes the second step, which is for creating the machine learning algorithm.

Import training cases#

Import the 3,300 cases in the supplied JSON file to train the model. As mentioned earlier, the small database that you generated is too small to train the model; its eight cases are used later to test the model.

path_folder = hfss.pyaedt_dir
training_file = os.path.join(path_folder, "misc", "ml_data_file_train_100MHz_1GHz.json")
with open(training_file) as readfile:
    my_dictio_list_train = json.load(readfile)

with open(json_file_path, "r") as readfile:
    my_dictio_list_test = json.load(readfile)

print(len(my_dictio_list_train))
print(len(my_dictio_list_test))
3330
8

Create lists#

Create four lists:

  • One for the input of the training

  • One for the output of training

  • One for the input of the test

  • One for the output of the test

Fill list for input of training#

Fill the list for the input of the training with frequency, width, permittivity, and thickness so that the output is the length. The objective of this algorithm is to predict the length from the other four parameters.

input_for_training_list = []
output_for_training_list = []
input_for_test_list = []
output_for_test_list = []

for i in range(len(my_dictio_list_train)):
    freq_width_perm_thick = [
        my_dictio_list_train[i]["frequency"] * 1e9,
        my_dictio_list_train[i]["width"] * 1000,
        my_dictio_list_train[i]["permittivity"],
        my_dictio_list_train[i]["thickness"] * 1000,
    ]
    length = my_dictio_list_train[i]["length"] * 1000
    input_for_training_list.append(freq_width_perm_thick)
    output_for_training_list.append(length)

for i in range(len(my_dictio_list_test)):
    freq_width_perm_thick = [
        my_dictio_list_test[i]["frequency"] * 1e9,
        my_dictio_list_test[i]["width"] * 1000,
        my_dictio_list_test[i]["permittivity"],
        my_dictio_list_test[i]["thickness"] * 1000,
    ]
    length = my_dictio_list_test[i]["length"] * 1000
    input_for_test_list.append(freq_width_perm_thick)
    output_for_test_list.append(length)

print("number of test cases: " + str(len(output_for_test_list)))
print("number of training cases: " + str(len(output_for_training_list)))
number of test cases: 8
number of training cases: 3330

Convert lists to arrays#

Convert the lists to NumPy arrays.

input_for_training_array = np.array(input_for_training_list)
output_for_training_array = np.array(output_for_training_list)
input_for_test_array = np.array(input_for_test_list)
output_for_test_array = np.array(output_for_test_list)
print("input array for training: " + str(input_for_training_array))
print("output array for training: " + str(output_for_training_array))

input array for training: [[8.5343952e+07 1.8699978e+03 1.0000000e+00 1.3607236e+02]
 [8.7562560e+07 1.8699978e+03 1.0000000e+00 3.1649155e+01]
 [8.5523544e+07 1.8699978e+03 1.0000000e+00 1.6327014e+02]
 ...
 [9.5384678e+08 3.3421940e+01 1.1450000e+01 1.0995456e+01]
 [1.1057147e+09 3.3421940e+01 1.1450000e+01 2.1035936e+00]
 [9.7680755e+08 3.3421940e+01 1.1450000e+01 1.3259815e+01]]
output array for training: [1463.626    1607.3876   1427.5016   ...   36.353287   39.326664
   35.355724]

Create model#

Create the model. Depending on the application, different models are appropriate. The easiest way to find a suitable model is to search for studies of the same application or of one close to it.

To predict characteristics of a patch antenna (resonance frequency, bandwidth, and input impedance), you can use SVR (Support Vector Regression) or an ANN (Artificial Neural Network). The following code uses SVR because it is easier to implement. An ANN is a more general method that also works with other high-frequency components. While it is more likely to generalize to other applications, implementing an ANN is much more complex.

For SVR, there are several kernels. For the patch antenna, RBF (Radial Basis Function) is preferred. Three other arguments have a large impact on the accuracy of the model: C, gamma, and epsilon. Sometimes suitable values are published along with the model for an application. Otherwise, you can try different values and measure the accuracy of the model for each. To make this example shorter, C=1e4 is used. However, the optimal value for this application is C=5e4.

svr_rbf = SVR(kernel="rbf", C=1e4, gamma="auto", epsilon=0.05)
regression = make_pipeline(StandardScaler(), svr_rbf)
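Trying different values of C, gamma, and epsilon can be automated. A sketch using scikit-learn's GridSearchCV on small synthetic data (the data here only stands in for the patch database):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for (frequency, width, permittivity, thickness) -> length
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(40, 4))
y = X @ np.array([1.0, 2.0, 0.5, -1.0])

# make_pipeline names the SVR step "svr", hence the "svr__" parameter prefix
pipe = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
param_grid = {"svr__C": [1e2, 1e3, 1e4], "svr__epsilon": [0.01, 0.05]}
search = GridSearchCV(pipe, param_grid, cv=3)
search.fit(X, y)
print(sorted(search.best_params_))  # → ['svr__C', 'svr__epsilon']
```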

Train model#

Train the model by fitting the pipeline to the training arrays.

regression.fit(input_for_training_array, output_for_training_array)

Pipeline(steps=[('standardscaler', StandardScaler()),
                ('svr', SVR(C=10000.0, epsilon=0.05, gamma='auto'))])


Dump model into JOBLIB file#

Dump the model into a JOBLIB file using the same method as you used earlier for the JSON file.

model_file = os.path.join(hfss.working_directory, "svr_model.joblib")
joblib.dump(regression, model_file)
['D:/Project/Project2336.pyaedt\\HFSS_WSO\\svr_model.joblib']
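Like the JSON file, the JOBLIB file can be reloaded. A self-contained round-trip check on an untrained pipeline:

```python
import os
import tempfile

import joblib
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1e4, gamma="auto", epsilon=0.05))

with tempfile.TemporaryDirectory() as tmp_dir:
    path = os.path.join(tmp_dir, "svr_model.joblib")
    joblib.dump(model, path)
    loaded = joblib.load(path)

print(type(loaded).__name__)  # → Pipeline
```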

Implement model in PyAEDT method#

This section describes the third step, which is for implementing the model in the PyAEDT method.

Load model#

Load the model in another Python file to predict different cases. Here the model trained with the optimal value C=5e4 is loaded rather than the one created earlier with C=1e4.

model_path = os.path.join(path_folder, "misc", "patch_svr_model_100MHz_1GHz.joblib")
regression = joblib.load(model_path)
c:\actions-runner\_work\pyaedt\pyaedt\testenv\lib\site-packages\sklearn\base.py:299: UserWarning: Trying to unpickle estimator StandardScaler from version 1.0.2 when using version 1.2.1. This might lead to breaking code or invalid results. Use at your own risk. For more info please refer to:
https://scikit-learn.org/stable/model_persistence.html#security-maintainability-limitations
  warnings.warn(
c:\actions-runner\_work\pyaedt\pyaedt\testenv\lib\site-packages\sklearn\base.py:299: UserWarning: Trying to unpickle estimator SVR from version 1.0.2 when using version 1.2.1. This might lead to breaking code or invalid results. Use at your own risk. For more info please refer to:
https://scikit-learn.org/stable/model_persistence.html#security-maintainability-limitations
  warnings.warn(
c:\actions-runner\_work\pyaedt\pyaedt\testenv\lib\site-packages\sklearn\base.py:299: UserWarning: Trying to unpickle estimator Pipeline from version 1.0.2 when using version 1.2.1. This might lead to breaking code or invalid results. Use at your own risk. For more info please refer to:
https://scikit-learn.org/stable/model_persistence.html#security-maintainability-limitations
  warnings.warn(

Predict length of patch#

Predict the length of the patch as a function of its resonance frequency and width, and of the substrate thickness and permittivity.

prediction_of_length = regression.predict(input_for_test_list)

Measure model efficiency#

Measure the model efficiency.

value: [138181027.24109, 585.1828511576299, 11.12, 22.366146081558284], prediction: 318670.84036961605, reality: 291359.7446832381
value: [134133865.35461399, 585.1828511576299, 11.12, 62.44903642508474], prediction: 274426.5946829869, reality: 271274.57226904243
value: [137034981.399256, 503.81388491809844, 11.12, 22.366146081558284], prediction: 325435.3017245196, reality: 292814.4089097635
value: [134955898.235929, 503.81388491809844, 11.12, 62.44903642508474], prediction: 272350.0395176941, reality: 274015.91278143186
value: [137148985.959438, 871.3557129679793, 4.63, 20.372751853711634], prediction: 476386.11142627336, reality: 454987.8816379061
value: [139252070.082803, 871.3557129679793, 4.63, 56.12907648369148], prediction: 466692.2547473433, reality: 434878.0524291964
value: [144958298.331933, 584.3386393406469, 4.63, 20.372751853711634], prediction: 464140.61189832294, reality: 458897.11005247256
value: [139414076.563063, 584.3386393406469, 4.63, 56.12907648369148], prediction: 442355.63005324797, reality: 442489.7039189111

The first set of counters bins the absolute gap (prediction - reality); the second set bins the relative gap ((prediction - reality) / reality).
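The counter computation itself is not included in this file. A sketch of cumulative-exclusive binning consistent with the printed counters (the function name, thresholds, and sample gaps are assumptions):

```python
def bin_gaps(gaps, thresholds=(0.5, 1, 2, 5, 10)):
    """Count each gap once, in the first threshold bucket it fits under;
    the final bucket collects everything at or above the last threshold."""
    counts = [0] * (len(thresholds) + 1)
    for gap in gaps:
        for j, threshold in enumerate(thresholds):
            if abs(gap) < threshold:
                counts[j] += 1
                break
        else:
            counts[-1] += 1
    return counts


# Invented sample gaps in millimeters, one per test case
print(bin_gaps([0.3, 1.5, 4.0, 27.0, -12.0, 9.0, 0.7, 2.2]))  # → [1, 1, 1, 2, 1, 2]
```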

print("sample size: " + str(len(prediction_of_length)))
print("<0.5 : " + str(counter_under_zero_five))
print("<1 : " + str(counter_under_one))
print("<2 : " + str(counter_under_two))
print("<5 : " + str(counter_under_five))
print("<10 : " + str(counter_under_ten))
print(">10 : " + str(counter_upper_ten) + "\n")
print(
    "sum : "
    + str(
        counter_under_zero_five
        + counter_under_one
        + counter_under_two
        + counter_under_five
        + counter_under_ten
        + counter_upper_ten
    )
)

print("-------------------------------------------\n")
print("<0.01 : " + str(rel_counter_under_one))
print("<0.02 : " + str(rel_counter_under_two))
print("<0.05 : " + str(rel_counter_under_five))
print("<0.1 : " + str(rel_counter_under_ten))
print("<0.2 : " + str(rel_counter_under_twenty))
print(">0.2 : " + str(rel_counter_upper_twenty))
print(
    "sum : "
    + str(
        rel_counter_under_one
        + rel_counter_under_two
        + rel_counter_under_five
        + rel_counter_under_ten
        + rel_counter_under_twenty
        + rel_counter_upper_twenty
    )
)
print("average is : " + str(average_relative_gap))
sample size: 8
<0.5 : 1
<1 : 0
<2 : 1
<5 : 1
<10 : 1
>10 : 4

sum : 8
-------------------------------------------

<0.01 : 2
<0.02 : 2
<0.05 : 1
<0.1 : 2
<0.2 : 1
>0.2 : 0
sum : 8
average is : 0.044344547745165025

Release AEDT#

Release AEDT.

hfss.release_desktop()
True

Total running time of the script: ( 9 minutes 1.923 seconds)

Gallery generated by Sphinx-Gallery