All Topics



I am installing the add-on 'Python for Scientific Computing (for Windows 64-bit)', but every time I try, I get this message: 'Error during app install: failed to extract app from C:\Program Files\Splunk\var\run\af3848d2600e244f.tar.gz to C:\Program Files\Splunk\var\run\splunk\bundle_tmp\ffaabe078cc2c4c4: The system cannot find the path specified.' Also, on the home page, the left-side bar says 'Unable to load app list. Refresh the page to try again.' The message persists even after refreshing. I don't know whether it's related to my install problem.
Hello everyone, I hope you can help me figure this out; I've been stuck on it since yesterday. I'm by no means an expert in Splunk, but I've been tasked with integrating Trellix EDR log files into Splunk. I found an app on Splunkbase (https://splunkbase.splunk.com/app/6480) that looked like the answer. I installed it on the heavy forwarder, as I had done before when integrating Rapid7 logs, and followed the brief guide provided by the app's author. That's where the problems start: after configuring the input settings, I didn't receive a single log file. I checked the logs and found that the problem had something to do with SSL certificates.

ERROR pid=7210 tid=MainThread file=base_modinput.py:log_error:309 | Error in input_module_trellix_edr_input.get_threats() - line 127 : HTTPSConnectionPool(host='api.soc.us-east-1.mcafee.com', port=443): Max retries exceeded with url: /ft/api/v2/ft/threats?sort=-lastDetected&filter=%7B%22severities%22:%20%5B%22s0%22,%20%22s1%22,%20%22s2%22,%20%22s3%22,%20%22s4%22,%20%22s5%22%5D,%20%22scoreRange%22:%20%5B30%5D%7D&from=1686690509938&limit=10000&skip=0 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1106)')))

I started googling that error and concluded it was probably an outdated SSL certificate. The thing is, when I connected to the heavy forwarder over SSH and tried to update the Python SSL certificates through pip, I found out you can't do that on a Splunk server. I found a workaround that suggested disabling the SSL check, and I spent hours looking through the most suspicious .py files but couldn't find where that check is made (it also didn't help that I know next to nothing about Python). I also tried playing with the input settings, trying different regions, etc., but to no avail. Eventually I started to suspect the problem was outdated SSL certificates hardcoded in the app (I don't really know if that's possible) and decided to contact support. That's when I noticed this app isn't supported by Splunk and that I'd have to contact the developer for any kind of support. I did some research on "Martin Ohl" and found that he no longer works at Trellix (no wonder the app never got an update). I went to Trellix's support page, couldn't find a support email, and started digging through their support and FAQ pages, but I couldn't find a single post or hint about integration with any SIEM, not just Splunk. So posting my case on the Splunk Community forums seemed like my best bet.

I'd appreciate any hint, insight, or even an anecdote about a similar case. If anyone has managed to integrate Trellix into Splunk, it would help a lot if you could share your experience, or if someone knows how to deal with the SSL certificate issue. I'll upload a PDF with more detail about the error log I received. Thanks in advance.
Greetings. Suppose I have an event schema of just a URL, where the query section of the URL may change:

```ndjson
{ url: "/?a=b&c=d" }
{ url: "/?arbitraryKey=arbitraryValue" }
```

The KV pairs are arbitrary.

From a sibling thread, I have already extracted KEYS and VALUES from the params into multi-value fields. For example:

```
// pseudo-code of Splunk variables, first event
keys = MultiValue [ a, c ]
values = MultiValue [ b, d ]

// pseudo-code of Splunk variables, second event
keys = MultiValue [ arbitraryKey ]
values = MultiValue [ arbitraryValue ]
```

I want each KV pair added back to its associated event, so that at the end of my query I can compute interesting stats. For example, I may want to `| table a, c, arbitraryKey` and see two records, corresponding to the input events above:

```txt
a, c, arbitraryKey
--------
b, d, null
null, null, arbitraryValue
```

Simply put, I want to derive a set of KV pairs from an event, then merge them back into the event as fields. `mvexpand` creates new rows, so that's not what I want. What other options do I have?
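One possible approach, sketched below and untested: since the key/value pairs are still present in the url field, they can be parsed directly with the extract (kv) command, which creates one field per key without adding rows. extract only operates on _raw, so this sketch temporarily swaps the query string into _raw and restores it afterwards; the field names query and saved_raw are placeholders introduced here for illustration.

```
<base search>
| rex field=url "\?(?<query>.*)$"
| eval saved_raw=_raw, _raw=query
| extract pairdelim="&" kvdelim="="
| eval _raw=saved_raw
| fields - query saved_raw
| table a c arbitraryKey
```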
Hi, I need something like the following as the final output. I have tried something like this, but it's not what I expected:

....query | chart values(percent) over svc by ResponseCode

Is it possible to include a second row-split field, something like this? I checked, and it doesn't work:

....query | chart values(percent) over Channel,svc by ResponseCode

Thanks
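chart accepts only a single field after over, so a common workaround (sketched below, untested against your data) is to concatenate the two fields into one and use that as the row split; the combined field name Channel_svc and the "|" separator are just placeholders:

```
....query
| eval Channel_svc=Channel."|".svc
| chart values(percent) over Channel_svc by ResponseCode
```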
Split a pattern in CSV lookup format into multiple rows (| lookup table.csv). Note that the number of subnet_id/ip_address pairs is not always 4; it can be anywhere from 1 to 4 or more (1...N). The example below is only one row, but the real data has multiple rows (e.g. Comp-B to Comp-Z).

Before the split:

Company    Location    SubnetID-IP
Comp-A     Loc-A       [{"subnet_id":"101","ip_address":"192.168.1.1"},{"subnet_id":"102","ip_address":"192.168.1.2"},{"subnet_id":"103","ip_address":"192.168.1.3"},{"subnet_id":"104","ip_address":"fd12:3456:789a:1::1"}]

After the split:

Company    Location    IP                     subnetID
Comp-A     Loc-A       192.168.1.1            101
Comp-A     Loc-A       192.168.1.2            102
Comp-A     Loc-A       192.168.1.3            103
Comp-A     Loc-A       fd12:3456:789a:1::1    104

Thank you so much for your help.
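A possible sketch (untested): pull both values out of the JSON array with a multivalue rex, zip them together, and expand into one row per pair. The "|" separator and the intermediate field pair are placeholders introduced here:

```
| inputlookup table.csv
| rex field="SubnetID-IP" max_match=0 "\"subnet_id\":\"(?<subnetID>[^\"]+)\",\"ip_address\":\"(?<IP>[^\"]+)\""
| eval pair=mvzip(subnetID, IP, "|")
| mvexpand pair
| eval subnetID=mvindex(split(pair, "|"), 0), IP=mvindex(split(pair, "|"), 1)
| table Company Location IP subnetID
```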
Hello, I have two panels with different logic (different indexes and filters) that produce X and Y respectively. I want to combine them as X + Y = Z and display Z. Can that be done with Splunk? I don't want to duplicate both searches just to get Z.
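If each search reduces to a single number, one possible sketch (untested) is to run one as the main search and bolt the other on with appendcols, then add the columns; the index names, filters, and the field names X, Y and Z below are placeholders for whatever your panels actually produce. In a dashboard, hanging both panels off a shared base search is another way to avoid re-running the logic.

```
index=<index_for_X> <filters_for_X> | stats count as X
| appendcols
    [ search index=<index_for_Y> <filters_for_Y> | stats count as Y ]
| eval Z=X+Y
| table X Y Z
```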
I have a raw data file (similar to a CSV) with various records, such as:

Process alarm,5/31/2023,23:19:45,20,11PC_0201,# 2 BOILER STEAM DRUM PRESSURE,Medium,L-Lower Limit,South Main,110.0,110.0,PSIG
Process alarm,5/31/2023,22:45:07,34,25N_0004,CP - 5kV BREAKERALARMS,Medium,TIE-BRKR CLOSED,Common Signal

In my transforms.conf file:

[parseProcessAlarm]
REGEX = Process alarm,([^,]+),([^,]+),([^,]+),([^,]+),([^,]+),([^,]+),([^,]+),([^,]+),([^,]+),([^,]+),([^,]+)
FORMAT = Date::$1 TimeStamp::$2 TagType::$3 Tagname::$4 Description::$5 AlarmSeverity::$6 AlarmName::$7 SignalCategory::$8 Data::$9 LimitData::$10 EngineeringUnits::$11

My problem is that this expression sometimes fails because a "Process alarm" record may have only 8 data fields after the initial "Process alarm" text, as shown above. How can I account for this?
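One possible adjustment (a sketch only; I haven't verified how FORMAT behaves when an optional capture group doesn't participate, so test it against sample data first) is to keep the first eight fields mandatory and make the last three capture groups optional:

```
[parseProcessAlarm]
REGEX = Process alarm,([^,]+),([^,]+),([^,]+),([^,]+),([^,]+),([^,]+),([^,]+),([^,]+)(?:,([^,]+))?(?:,([^,]+))?(?:,([^,]+))?
FORMAT = Date::$1 TimeStamp::$2 TagType::$3 Tagname::$4 Description::$5 AlarmSeverity::$6 AlarmName::$7 SignalCategory::$8 Data::$9 LimitData::$10 EngineeringUnits::$11
```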
Is there a way to blacklist data being ingested via Splunk Stream based on src_ip? I've tried the configuration below, but I'm not sure whether it's supported:

blacklist = src_ip="10.10.10.10"
My base search produces a stats table of deployment, total hits, and time_seconds. I just need one more field that calculates TPS (hit / time_seconds). Without the group-by it can produce the TPS; the challenge is how to do it when the stats is grouped by deployment.

base search
| rex ".*wm-(?<deployment>.*?)-"
| addinfo
| eval t=info_max_time - info_min_time
| stats count as hit max(t) as time_seconds by deployment

It results in this table (time_seconds is about 3600 because the time picker was set to the last 60 minutes):

deployment    hit      time_seconds
a             2506     3609.000
b             30285    3609.000
c             2213     3609.000

What I actually need is just the TPS (hit / time_seconds) per deployment:

deployment    tps
a             ...
b             ...
c             ...

Any idea how to produce this? Thanks. PS: I've already searched the community but couldn't find an answer.
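A minimal sketch (untested): since hit and time_seconds already exist per deployment after the stats, an eval right after it should give the TPS; rounding to three decimals is just an assumption on my part:

```
base search
| rex ".*wm-(?<deployment>.*?)-"
| addinfo
| eval t=info_max_time - info_min_time
| stats count as hit max(t) as time_seconds by deployment
| eval tps=round(hit/time_seconds, 3)
| table deployment tps
```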
Hi, I want to pass the value of $click.value$ and the corresponding cell's column header (which is generated dynamically by the Splunk query) to the search query of a drilldown view in my dashboard. Please let me know how to achieve that. For example, if the user clicks on the cell containing the value 100.00, I need to pass the channel name adp and the header value 304 to the drilldown search. adp can be obtained through $click.value$, but I don't know how to get 304 (and its corresponding search field name).
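If this is a Simple XML table drilldown, the predefined tokens $click.value$ (value of the leftmost column in the clicked row) and $click.name2$ (name of the clicked column) may be what you need. A rough sketch of a drilldown search using them; the index and field names here are placeholders you'd need to adjust:

```
index=<your_index> Channel="$click.value$" ResponseCode="$click.name2$"
```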
I have multiple JSON objects arriving in a single event and want to extract the status of one of them. For example, I want the status of the 'extract' event:

{"event": "load", "id": "132", "status": "passed"}
{"event": "write", "id": "132", "status": "passed"}
{"event": "extract", "id": "132", "status": "passed"}
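A minimal sketch (untested), using rex to pick the status out of the object whose event is 'extract'; the field name extract_status is a placeholder:

```
<base search>
| rex "\{\"event\": \"extract\"[^}]*\"status\": \"(?<extract_status>[^\"]+)\""
| table _time extract_status
```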
After installing version 9.1, it seems all my SVGs on dashboards no longer have transparent backgrounds; even the ones in Dashboard Studio have white boxes around them. I even tried creating the simplest transparent/no-fill rectangle, and there is still a solid white fill. I validated that the code works in online tools. Is there a new attribute or version tag that needs to be set?
I am fairly new to Splunk, but I am trying to get the Dell PowerScale add-on and dashboard up and running on my local Splunk Enterprise server. I followed the installation instructions; my Isilon cluster is sending syslog data to the indexer and I can query the data. However, I cannot figure out how to use the custom dashboard that is supposed to come with this app. When I go to the Apps dropdown and select the Dell PowerScale app, it defaults to the Configuration tab. I can select the Search tab, but I cannot find any prebuilt dashboards. I followed the setup instructions at https://splunkbase.splunk.com/app/2689
I am trying to summarize AWS ELB Access Logs.  Once I get the raw URLs, I need to substitute out the path and query parameters so that like paths collapse into the same string.  I do this via a series of rex commands.  For example: | rex field=pattern mode=sed "s/(.*?\/)[^ \/]+?\.(?i)(?:gif|jpg|jpeg|png|ico)(?= |\?|;)/\1{image}/" | rex field=pattern mode=sed "s/(.*?\/)[^ \/]+?\.(?i)(?:css)(?= |\?|;)/\1{stylesheet}/" | rex field=pattern mode=sed "s/(.*?\/)[^ \/]+?\.(?i)(?:js)(?= |\?|;)/\1{script}/" | rex field=pattern mode=sed "s/(\/(?:bootstrap|jquery|timeout))\/[0-9]+(?:\.[0-9])*(?=\/)/\1\/{version}/" Everything works fine until I reach a certain number of replacements, and then the job fails.  I delete enough, and suddenly it starts working again.  I look in the search job inspector, and don't see anything obvious.  Same for the Job Details Dashboard, and the search.log.  I see nothing to indicate what the problem was.  Here's the tail end of the search.log: 07-13-2023 11:37:03.695 INFO SearchParser [29042 localCollectorThread] - PARSING: typer | tags 07-13-2023 11:37:03.900 INFO FastTyper [29042 localCollectorThread] - found nodes count: comparisons=2208, unique_comparisons=1080, terms=187, unique_terms=116, phrases=156, unique_phrases=112, total leaves=2551 07-13-2023 11:37:03.917 INFO BatchSearch [29042 localCollectorThread] - Using Batch Search 07-13-2023 11:37:03.918 INFO UserManager [29042 localCollectorThread] - Setting user context: k28500 07-13-2023 11:37:03.918 INFO UserManager [29042 localCollectorThread] - Done setting user context: k28500 -> k28500 07-13-2023 11:37:03.919 INFO FastSearchFilter [29042 localCollectorThread] - Finished initializing IndexScopedFilter - trivial=0, nTerms=0, oTerms=0, host=0, source=1, sourcetype=1, linecount=0 exactCustomCmp=0 07-13-2023 11:37:03.919 INFO UserManager [29042 localCollectorThread] - Unwound user context: k28500 -> k28500 07-13-2023 11:37:03.919 INFO BatchSearch [29042 localCollectorThread] - BatchSearch is initialized for indexes = {aws-data}, et = 1689261722.000000000, lt = 1689262622.000000000, index_et = -9223372036854775808.000000000, index_lt = 9223372036854775807.999999000, noRead = FALSE 07-13-2023 11:37:03.919 INFO BatchSearch [29042 localCollectorThread] - Clearing any DDM references 07-13-2023 11:37:03.919 INFO BatchSearch [29042 localCollectorThread] - index: aws-data dbsize=0 07-13-2023 11:37:03.919 INFO UnifiedSearch [29042 localCollectorThread] - Initialization of search data structures took 225 ms 07-13-2023 11:37:03.920 INFO UnifiedSearch [29042 localCollectorThread] - Processed search targeting arguments 07-13-2023 11:37:03.921 INFO StatsProcessorV2 [29042 localCollectorThread] - StatsV2 (stats) args: count, by, pattern 07-13-2023 11:37:03.921 INFO StatsProcessorV2 [29042 localCollectorThread] - StatsProcessorV2::processArguments: Unaligned accesses are free 07-13-2023 11:37:03.921 INFO StatsAggregations [29042 localCollectorThread] - Instantiating Stats function group_count for key=, alias=count 07-13-2023 11:37:03.921 INFO StatsContext [29042 localCollectorThread] - shouldFallbackToOldStats: _use_v2_level=USE_V2_ALL, fallback=false 07-13-2023 11:37:03.921 INFO AutoLookupDriver [29042 localCollectorThread] - Will use Lookup: LOOKUP-Domain-host 07-13-2023 11:37:03.921 INFO AutoLookupDriver [29042 localCollectorThread] - Will use Lookup: LOOKUP-cf-domains-ags-env 07-13-2023 11:37:03.938 WARN SearchOperator:kv [29042 localCollectorThread] - Could not find a transform named REPORT-EMR 07-13-2023 11:37:03.942 WARN 
SearchOperator:kv [29042 localCollectorThread] - Could not find a transform named REPORT-extraction_1_kp 07-13-2023 11:37:03.942 WARN SearchOperator:kv [29042 localCollectorThread] - Could not find a transform named REPORT-extracting_fi_pattern_name 07-13-2023 11:37:03.949 INFO LocalCollector [29042 localCollectorThread] - Final required fields list = AGS,AppAbbrev,ClientRequestHost,EventData_Xml,LogFileCleared_Xml,Message,RenderingInfo_Xml,System_Props_Xml,UserData_Xml,_raw,_subsecond,affectedResources,eventName,extended_properties,functionName,host,index,prestats_reserved_*,psrsvd_*,raw_msg,request,request_query,request_uri,source,sourcetype 07-13-2023 11:37:03.949 INFO UserManager [29042 localCollectorThread] - Unwound user context: k28500 -> NULL 07-13-2023 11:37:03.949 INFO UserManager [29042 localCollectorThread] - Setting user context: k28500 07-13-2023 11:37:03.949 INFO UserManager [29042 localCollectorThread] - Done setting user context: NULL -> k28500 07-13-2023 11:37:03.949 INFO UserManager [29042 localCollectorThread] - Unwound user context: k28500 -> NULL 07-13-2023 11:37:04.432 INFO ReducePhaseExecutor [29032 StatusEnforcerThread] - ReducePhaseExecutor=1 action=PREVIEW 07-13-2023 11:37:04.489 INFO SearchParser [29032 StatusEnforcerThread] - PARSING: | streamnoop 07-13-2023 11:37:04.489 INFO SearchParser [29032 StatusEnforcerThread] - PARSING: streamnoop | stats count by pattern 07-13-2023 11:37:04.489 INFO StatsContext [29032 StatusEnforcerThread] - Setting chunk size min=65536 max=1048576 double_every=100 07-13-2023 11:37:04.489 INFO StatsContext [29032 StatusEnforcerThread] - Setting max memory usage to 209715200 07-13-2023 11:37:04.489 INFO StatsProcessorV2 [29032 StatusEnforcerThread] - StatsV2 (stats) args: count, by, pattern 07-13-2023 11:37:04.489 INFO StatsProcessorV2 [29032 StatusEnforcerThread] - StatsProcessorV2::processArguments: Unaligned accesses are free 07-13-2023 11:37:04.489 INFO StatsAggregations [29032 StatusEnforcerThread] - Instantiating Stats function group_count for key=, alias=count 07-13-2023 11:37:04.489 INFO StatsContext [29032 StatusEnforcerThread] - shouldFallbackToOldStats: _use_v2_level=USE_V2_ALL, fallback=false 07-13-2023 11:37:04.489 INFO ReducePhaseExecutor [29032 StatusEnforcerThread] - ReducePhaseExecutor=1 action=PREVIEW 07-13-2023 11:37:04.490 INFO CMBucketId [29040 SearchResultExecutorThread] - CMIndexId: New indexName=aws-data inserted, mapping to id=2 07-13-2023 11:37:04.492 INFO PreviewExecutor [29032 StatusEnforcerThread] - Finished preview generation in 0.000485272 seconds. 
07-13-2023 11:37:04.826 INFO UserManager [29036 SearchResultExecutorThread] - Unwound user context: k28500 -> NULL 07-13-2023 11:37:04.826 INFO UserManager [29038 SearchResultExecutorThread] - Unwound user context: k28500 -> NULL 07-13-2023 11:37:04.826 INFO ResultsCollationProcessor [29043 phase_1] - Writing remote_event_providers.csv to disk 07-13-2023 11:37:04.826 INFO UserManager [29040 SearchResultExecutorThread] - Unwound user context: k28500 -> NULL 07-13-2023 11:37:04.826 INFO UserManager [29037 SearchResultExecutorThread] - Unwound user context: k28500 -> NULL 07-13-2023 11:37:04.826 INFO UserManager [29039 SearchResultExecutorThread] - Unwound user context: k28500 -> NULL 07-13-2023 11:37:04.829 INFO DownloadRemoteDataTransaction [29043 phase_1] - Downloading logs from all remote event providers 07-13-2023 11:37:05.027 INFO ReducePhaseExecutor [29043 phase_1] - Downloading all remote search.log files took 0.199 seconds 07-13-2023 11:37:05.027 INFO DownloadRemoteDataTransaction [29043 phase_1] - Downloading logs from all remote event providers 07-13-2023 11:37:05.188 INFO ReducePhaseExecutor [29043 phase_1] - Downloading all remote search_telemetry.json files took 0.162 seconds Any advice? 
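Not a root-cause fix, but one way to reduce the number of sed-mode rex stages (sketched below, untested) is to fold several substitutions into a single eval using nested replace() calls, which take a PCRE pattern and support \1-style backreferences in the replacement; whether this avoids whatever limit the job is hitting is something you'd have to verify against your data:

```
| eval pattern=replace(replace(replace(pattern,
      "(.*?/)[^ /]+?\.(?i:gif|jpg|jpeg|png|ico)(?= |\?|;)", "\1{image}"),
      "(.*?/)[^ /]+?\.(?i:css)(?= |\?|;)", "\1{stylesheet}"),
      "(.*?/)[^ /]+?\.(?i:js)(?= |\?|;)", "\1{script}")
```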
Hi all,

We are looking for users who trigger one or more of these policies:

policy="[DLP] -  Internal " OR policy="[SMTP] - Gmail - Internal " OR policy="[DLP] - GDrive - Internal "

AND one or more of these conditions:

policy="All DLP Policies" AND activity=upload
OR policy="[SMTP] -  Password Protected "
OR alert_type=uba AND alert_name=" suspicious  uploads"

In every case, please exclude all instances provided in the lookup below. Example of the full list of instance IDs: xyz.com, abc.com, 123.com, 786.com, 143.com, etc.

Please let me know if you have any questions. Thanks.
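A rough sketch of how this could be expressed (untested, with several assumptions: the index name, the lookup file name excluded_instances.csv, the event/lookup field instance, and the user field are all placeholders you'd need to adjust to your data):

```
index=<your_index>
    (policy="[DLP] -  Internal " OR policy="[SMTP] - Gmail - Internal " OR policy="[DLP] - GDrive - Internal "
     OR (policy="All DLP Policies" activity=upload) OR policy="[SMTP] -  Password Protected "
     OR (alert_type=uba alert_name=" suspicious  uploads"))
    NOT [ | inputlookup excluded_instances.csv | fields instance ]
| eval group1=if(policy="[DLP] -  Internal " OR policy="[SMTP] - Gmail - Internal " OR policy="[DLP] - GDrive - Internal ", 1, 0)
| eval group2=if((policy="All DLP Policies" AND activity="upload") OR policy="[SMTP] -  Password Protected " OR (alert_type="uba" AND alert_name=" suspicious  uploads"), 1, 0)
| stats sum(group1) as group1_hits sum(group2) as group2_hits by user
| where group1_hits>0 AND group2_hits>0
```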
Hi Team,

How can I fetch this raw log event?

2023-07-13 02:37:14.317 [INFO ] [pool-2-thread-1] FileSensor - Total msg processed for trim reage file:241

host = lgposput50341.gso.aexp.com
source = /app/abs-upstreamer/logs/abs-upstreamer.log
sourcetype = 600000304_gg_abs_ipc2
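A minimal sketch for finding these events and pulling out the processed-message count; the index placeholder is an assumption, and the extracted field name msg_count is one I've introduced:

```
index=<your_index> sourcetype=600000304_gg_abs_ipc2 source="/app/abs-upstreamer/logs/abs-upstreamer.log" "Total msg processed for trim reage"
| rex "file:(?<msg_count>\d+)"
| table _time msg_count _raw
```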
Hi Team,

I am creating a Splunk query as below:

index="600000304_d_gridgain_idx*" sourcetype=600000304_gg_abs_ipc2 source="/app/abs-upstreamer/logs/abs-upstreamer.log"

I am also getting events from another source that I don't want included in my query: source="/var/app". Please guide me on how to exclude it.
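A minimal sketch adding an explicit exclusion for that source (if the real source value is a longer path underneath /var/app, a trailing wildcard such as source="/var/app*" may be needed instead):

```
index="600000304_d_gridgain_idx*" sourcetype=600000304_gg_abs_ipc2 source="/app/abs-upstreamer/logs/abs-upstreamer.log" NOT source="/var/app"
```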
Hi, I've been trying to get a free trial of AppDynamics. It shows 'setting up... almost ready' and then asks me to get on a call with support to have the trial initiated. But when I go back to the trial page, it shows "Looks like you don’t have any products associated with your account." Please check.
Hi, I have been struggling with an issue for days now and keep running in circles; any help is much appreciated.

Below is some content of a CSV file with the fields rtime (date), start (time) and stop (time). I am trying to use the start and stop times as the time range for an appended subsearch on other data (via append, appendcols or whatever works).

rtime,start,stop
"2023-07-02",0,0
"2023-07-03","2023-07-03 06:54","2023-07-03 22:42"
"2023-07-04","2023-07-04 06:54","2023-07-04 22:24"
"2023-07-05","2023-07-05 06:48","2023-07-05 21:00"
"2023-07-06","2023-07-06 06:54","2023-07-06 22:36"
"2023-07-07","2023-07-07 07:18","2023-07-07 22:48"
"2023-07-08",0,0
"2023-07-09",0,0
"2023-07-10","2023-07-10 07:06","2023-07-10 22:06"
"2023-07-11","2023-07-11 07:00","2023-07-11 22:36"
"2023-07-12","2023-07-12 06:48","2023-07-12 18:48"

The subsearch basically looks like the one below, so it is quite simple and straightforward: I want to count how long a machine is in a given state (5 or 6). From there I can calculate a time in hours per day (but never mind those details). My problem is that I am not able to reuse the start and stop date-time fields from the CSV file above for this subsearch. I have tried many things, e.g. using the start and stop fields from the CSV file as the earliest and latest values for the new subsearch (append, appendcols, etc.) and reformatting all times to epoch or other time formats, but no success so far.

index=<namex> AND MachineID=YY AND Tag="*Application.GVL_Hmi.Unit.PackML.Sts.State" AND (Value=6 OR Value=5 )
| timechart span=1h count(Value) as Exec
| timechart span=1d sum(Exec) as Execute
| eval Execute=round(Execute,1)

As said, any help, tip or direction is welcome. Thanks, AshleyP
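One possible direction (a rough, untested sketch): if the CSV is available as a lookup, you could drive one search per row with the map command, converting each row's start/stop into epoch values and passing them in as earliest/latest. The lookup name machine_times.csv is a placeholder, the idle days (start=0) are filtered out first, and maxsearches needs to cover the number of rows:

```
| inputlookup machine_times.csv
| where start!="0"
| eval earliest=strptime(start, "%Y-%m-%d %H:%M"), latest=strptime(stop, "%Y-%m-%d %H:%M")
| map maxsearches=100 search="search index=<namex> MachineID=YY Tag=*Application.GVL_Hmi.Unit.PackML.Sts.State (Value=5 OR Value=6) earliest=$earliest$ latest=$latest$ | stats count as Exec | eval rtime=\"$rtime$\""
```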
Hello everyone, I need to extract the first IP from ASA events. After the first IP there are sometimes three other IPs, sometimes two, and sometimes one, but I need only the first one, regardless of how many IPs follow it.

2023-07-13T11:24:25+02:00 FirASA : %ASA-6-302013: Built inbound TCP connection 2907550162 for dmzIB:192.168.7.7/49446 (192.168.7.7/49446) to inside:10.100.40.4/8080 (10.100.40.4/8080)

Thank you.
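A minimal sketch (untested; the field name first_ip is a placeholder): rex returns only the first match by default, so anchoring on the "for <zone>:" part of the Built-connection message should capture just the first address:

```
<base search for the ASA events>
| rex "for\s+[^:]+:(?<first_ip>\d{1,3}(?:\.\d{1,3}){3})"
| table _time first_ip
```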