All Topics


Hello, I have a search query that ends with | outputlookup report.csv, and I saved it as an alert that runs daily. But when I try to check the results using | inputlookup report.csv, it finds no results. I checked the job inspector for the alert and found the CSV was output to splunk/var/run/splunk/csv. Can't I read a file in that directory, or should I use an alert action to output the CSV instead?
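
For reference, a minimal sketch of the pattern being described; the index, sourcetype, and field names are placeholders:

| makeresults count=0
| append [ search index=my_index sourcetype=my_sourcetype | stats count by host ]
| outputlookup report.csv

and reading it back:

| inputlookup report.csv
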
I wrote a screen-scraping script on a server running Splunk Forwarder version 8.2.3. The script is in a file and runs fine from the Linux command line. It was developed with Python 3.6. I added the script to our Universal Forwarder's local inputs.conf and I can see the script is scheduled successfully. However, it consistently fails with a message in splunkd.log. I have no idea why the message references python3.7; it is not installed anywhere on the system. Python 3.6 is installed at /usr/bin/python3.6. I tried changing the server.conf properties in local by adding "python.version = python3" to the [general] stanza and restarting, but to no avail. Please advise what else I might try. Thanks in advance.

08-18-2022 00:19:45.525 +0000 ERROR ExecProcessor [3423479 ExecProcessor] - message from "python3.7 /opt/splunk/sjcinf8469vmw15/splunkforwarder-8.2.3/splunkforwarder/bin/scripts/scrapeGmrPage.py" /bin/sh: python3.7: command not found
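
One workaround often suggested here: the Universal Forwarder does not bundle a Python runtime (so python.version has nothing to select), and it shells out to whatever interpreter name it resolves. A sketch, assuming the paths from the post, of wrapping the script so the system interpreter is invoked explicitly:

#!/bin/sh
# scrapeGmrPage.sh - call the system Python 3.6 explicitly
exec /usr/bin/python3.6 /opt/splunk/sjcinf8469vmw15/splunkforwarder-8.2.3/splunkforwarder/bin/scripts/scrapeGmrPage.py "$@"

and pointing the scripted input at the wrapper instead of the .py file (interval is a placeholder):

[script:///opt/splunk/sjcinf8469vmw15/splunkforwarder-8.2.3/splunkforwarder/bin/scripts/scrapeGmrPage.sh]
interval = 300
disabled = 0
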
Hello, I have created an AWS Linux instance and installed Splunk Enterprise on it. The install was successful and I am able to reach the login page for Splunk Web. When logging in, I get an error at the bottom of the page that says "Server Error". All of the necessary ports are open on the server. I have attached a photo of the error and the WARN I received in the splunkd logs after each failed login attempt. Any suggestions on a fix would be highly appreciated.
I have a modular input that writes to Splunk using:

event = Event()
event.data = json.dumps(data)
ew.write_event(event)

This all works fine, except that some event.data records contain a detail field whose value is itself JSON, and it gets written to Splunk as a string. How do I perform field extraction and index the data contained in that one detail field, i.e., JSON within JSON?
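
At search time, one common way to pull fields out of a JSON-valued field is to point spath at it; a minimal sketch, assuming the inner field is extracted under the name detail (index and sourcetype are placeholders):

index=my_index sourcetype=my_modinput
| spath input=detail
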
So it says: Could not load lookup=LOOKUP-itsi_kpi_attributes. Looking around, I think there are pointers to it. But if I click on Lookup table files and filter for itsi_kpi_attributes, I get nothing. What's worse, it says there is none, no CSV, even though I see a pointer to something that doesn't exist. So: 1) if I am right, where is this file? If not, is what I am asking making sense? 2) Is there something else you'd like to see? If so, tell me where it is. Thanks.
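
To trace where a lookup definition actually points, the REST endpoint for lookup transforms can help; a sketch (app scope wildcarded, column names may vary by version and lookup type):

| rest /servicesNS/-/-/data/transforms/lookups
| search title=itsi_kpi_attributes
| table title filename collection eai:appName
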
Hi everyone, is it possible to modify a portion of a CSV lookup file used with inputlookup? Cheers.
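
A common pattern for this is read-modify-write: read the lookup, change just the rows of interest, and write it back. A sketch with placeholder file, field, and value names:

| inputlookup my_lookup.csv
| eval status=if(host=="hostA", "updated", status)
| outputlookup my_lookup.csv
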
Hello! I am making a search that searches data from Sunday-Tuesday one week and Sunday-Wednesday the next week. I have a field called "date_wday" which contains values representing the day of the week (sunday, monday, tuesday, ....., saturday). My search:

index=blah date_wday=monday OR tuesday OR sunday

My search successfully filters the data to pull from Sunday-Tuesday, but how can I also have it add data from every other Wednesday? e.g.,

Week 1: Sunday-Tuesday
Week 2: Sunday-Wednesday
Week 3: Sunday-Tuesday
......

Thank you
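
One way to sketch the alternating-Wednesday logic is week-number parity; whether even or odd weeks are the "Wednesday weeks" is an assumption you would flip to match your schedule:

index=blah
| eval week=tonumber(strftime(_time, "%V"))
| where date_wday=="sunday" OR date_wday=="monday" OR date_wday=="tuesday" OR (date_wday=="wednesday" AND week % 2 == 0)
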
Hey everyone, I recently installed the BMC Remedy Add-on for Splunk and followed the directions to get set up. I successfully connected to BMC via REST credentials, set up the remedy_fields.conf file, and successfully created a ticket via search and the remedyincidentcreatestreamrest command. My problem is automating this experience. I created an alert based on a search (per the docs) and specified the "Remedy Incident Integration using REST API" trigger. Looking at the splunk_ta_remedy_rest_alert.log file I see the following authentication error:

2022-08-17 15:07:35,356 ERROR pid=11181 tid=MainThread file=remedy_helper.py:create_incident:287 | Authentication failed, status_code=401, url='https://url-restapi.onbmc.com:443/api/arsys/v1.0/entry/HPD:ServiceInterface', params={'fields': 'values(Incident Number, Incident_Status)'}, response=[{"messageType":"ERROR","messageText":"Authentication failed","messageNumber":623,"messageAppendedText":"remedy_user"}]
2022-08-17 15:07:35,657 INFO pid=11181 tid=MainThread file=remedy_helper.py:create_jwt_token:162 | Successfully generated a new jwt token
2022-08-17 15:07:36,030 ERROR pid=11181 tid=MainThread file=remedy_helper.py:create_incident:287 | Error occured, status_code=400, url='https://url-restapi.onbmc.com:443/api/arsys/v1.0/entry/HPD:ServiceInterface', params={'fields': 'values(Incident Number, Incident_Status)'}, response=[{"messageType":"ERROR","messageText":"Required field cannot be blank.","messageNumber":326,"messageAppendedText":"HPD:Help Desk : Contact Company"}]
2022-08-17 15:07:36,030 ERROR pid=11181 tid=MainThread file=remedy_incident_rest_alert_base.py:post_incident:227 | [Remedy Incident REST Alert] The search name: Ingress to ICM Missing DN. Failed to Create/Update incident
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_remedy/bin/remedy_helper.py", line 432, in retry
    return func(account_info, *arg, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_remedy/bin/remedy_helper.py", line 288, in create_incident
    raise Exception(msg)
Exception: Authentication failed, status_code=401, url='https://url-restapi.onbmc.com:443/api/arsys/v1.0/entry/HPD:ServiceInterface', params={'fields': 'values(Incident Number, Incident_Status)'}, response=[{"messageType":"ERROR","messageText":"Authentication failed","messageNumber":623,"messageAppendedText":"remedy_user"}]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_remedy/bin/remedy_incident_rest_alert_base.py", line 200, in post_incident
    proxy_config=self.proxy_config,
  File "/opt/splunk/etc/apps/Splunk_TA_remedy/bin/remedy_helper.py", line 454, in retry
    return func(account_info, *arg, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_remedy/bin/remedy_helper.py", line 288, in create_incident
    raise Exception(msg)
Exception: Error occured, status_code=400, url='https://url-restapi.onbmc.com:443/api/arsys/v1.0/entry/HPD:ServiceInterface', params={'fields': 'values(Incident Number, Incident_Status)'}, response=[{"messageType":"ERROR","messageText":"Required field cannot be blank.","messageNumber":326,"messageAppendedText":"HPD:Help Desk : Contact Company"}]

I have a separate application creating tickets via REST and was told to use HPD:IncidentInterface_Create. I'm not sure what the difference is (if any) between running a search and having an alert trigger it, but I am stumped. If anyone can offer some insight I would appreciate it. Thanks! Chad
Hello, we are using Splunk Cloud to centralize all our logs and are currently struggling with the Bitdefender integration. We have added the HTTP Event Collector and are now struggling with the final step of sending the logs from Bitdefender to Splunk. When I run the code to connect the two:

curl -k -X POST OUR_GRAVITYZONE_API/v1.0/jsonrpc/push \
  -H 'authorization: Basic GRAVITYZONE_API_KEY' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -d '{
    "params": {
      "status": 1,
      "serviceType": "splunk",
      "serviceSettings": {
        "url": "https://input-OUR_SPLUNK_CLOUD_LINK:8088/services/collector",
        "requireValidSslCertificate": false,
        "splunkAuthorization": "Splunk HTTP_EVENT_KEY"
      },
      "subscribeToEventTypes": {
        "hwid-change": true, "modules": true, "sva": true, "registration": true,
        "supa-update-status": true, "av": true, "aph": true, "fw": true,
        "avc": true, "uc": true, "dp": true, "device-control": true,
        "sva-load": true, "task-status": true, "exchange-malware": true,
        "network-sandboxing": true, "malware-outbreak": true, "adcloud": true,
        "exchange-user-credentials": true, "exchange-organization-info": true,
        "hd": true, "antiexploit": true
      },
      "jsonrpc": "2.0",
      "method": "setPushEventSettings",
      "id": "1"
    }
  }'

it returns the error:

{
  "id": null,
  "jsonrpc": "2.0",
  "error": {
    "code": -32600,
    "message": "Invalid Request",
    "data": {
      "details": "Invalid or missing request id. Notifications are not supported"
    }
  }
}

Are there any fixes we could apply to forward our logs from GravityZone into Splunk Cloud?
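
For what it's worth, in JSON-RPC 2.0 the jsonrpc, method, and id members belong at the top level of the request, as siblings of params rather than inside it, which matches the "Invalid or missing request id" complaint. A sketch of the reshaped payload, placeholders as in the post:

-d '{
  "jsonrpc": "2.0",
  "method": "setPushEventSettings",
  "id": "1",
  "params": {
    "status": 1,
    "serviceType": "splunk",
    "serviceSettings": { ... },
    "subscribeToEventTypes": { ... }
  }
}'
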
Recently upgraded (on-prem) from 8.2.6 to 9.0.0.1 and now getting this message on my dashboards: "This dashboard version is missing. Update the dashboard version in source." (The "Learn more" link following it fails, like those links always have... wish they'd fix that.) One other person who asked this was told to read up on the jQuery 3.5 upgrade, but that didn't help, and the documentation didn't tell me what to do. I have to assume it's simply looking for a tag in the XML code, but what and where? Thanks!
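
If it helps, the message generally refers to a version attribute on the root element of the Simple XML source; a sketch, where "1.1" (the value for jQuery-3.5-ready dashboards) is the usual candidate but depends on your dashboard's compatibility:

<dashboard version="1.1">
  ...
</dashboard>
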
Hi everyone, my client has indexes whose events are sometimes really large. The problem is that field extraction in such cases doesn't work properly. For example, opening an event shows the whole raw event, but the fields below it are trimmed: if a field is a few thousand characters long, only about the first thousand characters appear in the fields view below the event. Moreover, efforts to manipulate such fields produce unexpected results, e.g.,

| eval len_x = len(field_x)

returns 71, although the field is several thousand characters long. Searches targeting such events sometimes fail, e.g., specifying an event by ID with event_uid=unique_id (a field-value combination present in the event) doesn't return anything, although a less specific search over the same time frame returns that event. We also tried to tackle the problem at the source, i.e., to shorten the overlong field before indexing:

| eval field_x = if(len(field_x) > 1000, substr(field_x, 1, 1000) . "(oversized field trimmed)", field_x)

but this only trimmed the fields, without adding the text in the brackets. So, since I haven't managed to find it in the documentation, I would like to ask: is there a limit on field length, and does it depend on the overall event size? How should one deal with such long fields? Thanks and kind regards, Krunoslav Ivesic
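
One setting often pointed at in this situation is the automatic key/value extraction limit in limits.conf on the search head; a sketch, with the caveat that defaults and interactions vary by version, so treat it as a starting point rather than a confirmed fix:

# limits.conf
[kv]
# how much of an event is examined for automatic field extraction
maxchars = 40960
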
Hi guys, I have a host_blackout.csv, and I want to update the blackout for three hosts (mep1, mep2, mep3) among the 30 hosts I have: 1) the new end_time should be updated to the end of next week ("08/28/2022 11:00"). My output looks like this:

end_time          host  notes         start_time
08/18/2022 09:00  mep1  INC000006     08/14/2022 23:00
08/11/2022 09:00  mep2  INC000002     08/11/2022 20:15
08/12/2022 10:00  mep3  INC000003     08/10/2022 12:00
08/10/2022 09:00  mep4  INC000004     08/06/2022 23:00
08/05/2022 09:00  mep5  INC0000012    10/27/2018 00:00
08/05/2022 09:00  mep6  INC00000123   08/03/2022 23:00
08/05/2022 09:00  mep7  INC000002537  10/27/2018 00:00
08/05/2022 09:00  mep8  INC0000011    11/20/2018 00:00
08/05/2022 09:00  mep9

Can you help please?
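
A sketch of the read-modify-write pattern for this, assuming the lookup file name from the post:

| inputlookup host_blackout.csv
| eval end_time=if(host=="mep1" OR host=="mep2" OR host=="mep3", "08/28/2022 11:00", end_time)
| outputlookup host_blackout.csv
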
Hello, after upgrading from an earlier version to 3.0.9 (I was trying to fix the JavaScript issue other people reported), the app isn't creating incidents anymore. I found this in alert_manager_scheduler.log, which is the only Alert Manager log that has entries. I have checked the KV store; it's ready on all SHC members, but none of the alert metadata is getting created.

2022-08-17 13:42:19,996 WARNING pid="5761" logger="alert_manager_scheduler" message="KV Store is not yet available, sleeping for 1s." (alert_manager_scheduler.py:62)

The alerts run and try to send, but get this in splunkd.log:

08-17-2022 13:46:05.489 -0400 INFO sendmodalert [25767 AlertNotifierWorker-0] - Invoking modular alert action=alert_manager for search="Widows logging" sid="scheduler__<user>__search__RMD5467d08babc5954da_at_1660758360_111_64D51C26-A29A-41E8-917F-9211B53D56B5" in app="search" owner="<user>" type="saved"
08-17-2022 13:46:06.095 -0400 ERROR sendmodalert [25767 AlertNotifierWorker-0] - action=alert_manager STDERR - Traceback (most recent call last):
08-17-2022 13:46:06.095 -0400 ERROR sendmodalert [25767 AlertNotifierWorker-0] - action=alert_manager STDERR - File "/opt/splunk/etc/apps/alert_manager/bin/alert_manager.py", line 574, in <module>
08-17-2022 13:46:06.095 -0400 ERROR sendmodalert [25767 AlertNotifierWorker-0] - action=alert_manager STDERR - config = getIncidentSettings(payload, settings, search_name, sessionKey)
08-17-2022 13:46:06.095 -0400 ERROR sendmodalert [25767 AlertNotifierWorker-0] - action=alert_manager STDERR - File "/opt/splunk/etc/apps/alert_manager/bin/alert_manager.py", line 484, in getIncidentSettings
08-17-2022 13:46:06.095 -0400 ERROR sendmodalert [25767 AlertNotifierWorker-0] - action=alert_manager STDERR - if ('impact' in result or result['impact'] != ''):
08-17-2022 13:46:06.095 -0400 ERROR sendmodalert [25767 AlertNotifierWorker-0] - action=alert_manager STDERR - KeyError: 'impact'
08-17-2022 13:46:06.142 -0400 INFO sendmodalert [25767 AlertNotifierWorker-0] - action=alert_manager - Alert action script completed in duration=651 ms with exit code=1
08-17-2022 13:46:06.142 -0400 WARN sendmodalert [25767 AlertNotifierWorker-0] - action=alert_manager - Alert action script returned error code=1
08-17-2022 13:46:06.142 -0400 ERROR SearchScheduler [25767 AlertNotifierWorker-0] - Error in 'sendalert' command: Alert script returned error code 1., search='sendalert alert_manager results_file="/opt/splunk/var/run/splunk/dispatch/scheduler__<user>__search__RMD5467d08babc5954da_at_1660758360_111_64D51C26-A29A-41E8-917F-9211B53D56B5/results.csv.gz" results_link="https://<host>:8000/app/search/@go?sid=scheduler__<user>__search__RMD5467d08babc5954da_at_1660758360_111_64D51C26-A29A-41E8-917F-9211B53D56B5"'

Does anyone have any idea what might be going on? Thanks for your assistance.
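
As an aside on the traceback itself: the logged condition evaluates result['impact'] even when the key is absent, because `or` runs the right-hand side whenever the left side is false, which is exactly when the key lookup raises KeyError. Illustration of the Python semantics only, not a claim about the app's intended logic:

# as logged: raises KeyError when 'impact' is absent
if ('impact' in result or result['impact'] != ''):
    ...

# a guard that only reads the key when it exists
if ('impact' in result and result['impact'] != ''):
    ...
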
Dear All, I have a pretty bare Splunk Universal Forwarder that was installed at 8.2.5 and had no errors on restart, but when I upgraded it to 9.0.0.1 I started to get the following errors. NOTE: These are all in the system/default files (so not my settings):

Invalid key in stanza [webhook] in /opt/splunkforwarder/etc/system/default/alert_actions.conf, line 229: enable_allowlist (value: false).
Invalid key in stanza [provider:splunk] in /opt/splunkforwarder/etc/system/default/federated.conf, line 20: mode (value: standard).
Invalid key in stanza [general] in /opt/splunkforwarder/etc/system/default/federated.conf, line 23: needs_consent (value: true).
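
To reproduce these warnings on demand and confirm which configuration layer each flagged key comes from, btool can help; a sketch using the install path from the post:

/opt/splunkforwarder/bin/splunk btool check
/opt/splunkforwarder/bin/splunk btool alert_actions list webhook --debug
/opt/splunkforwarder/bin/splunk btool federated list --debug
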
Currently I have used a query similar to the one below to plot data on a 24-hour graph:

index=mock_index source=mock_source.log param1 param2 param3
| rex field=_raw "Latency: (?<latency>[0-9]+)"
| eval time = mvjoin(mvindex(split(_raw, " "), 0, 1), " ")
| eval time = strptime(time, "%Y-%m-%d %H:%M:%S,%3N")
| table time, latency

An example event:

2022-08-16 14:04:34,123 INFO [stuff] Latency: 55 [stuff]

Ideally I would like to get latency averages over 5-minute periods and display the data on a graph whose x-axis labels 30-minute intervals. Given this goal, is strptime() the best way to manage the timestamps in my events?
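
If the event timestamp was parsed correctly at index time, _time already carries it, so the split/strptime steps may be unnecessary; a sketch of the 5-minute averaging with timechart:

index=mock_index source=mock_source.log param1 param2 param3
| rex field=_raw "Latency: (?<latency>[0-9]+)"
| timechart span=5m avg(latency) AS avg_latency
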
Hello! Can I please have help with making a table that allows people to type text that will create a new row in the table? I have made one already, but once someone types text, it does not clear the input text previously entered. Once someone has entered their text and presses Submit, I want the text boxes to go back to being blank (circled in blue in the attached screenshot). Also, is there a way someone can delete a row in the table after it is added, in case they put in the wrong information or it is no longer relevant? Thank you!!!
I'm working with a KV store, since the Netskope IP information needs updating. I figured out how to add to it using this SPL:

| makeresults
| eval Data="aaa.bbb.ccc.ddd/mask"
| eval Desc="Netskope"
| eval Type="netskope_ip"
| outputlookup append=true override_if_empty=false my_kvstore

I found multiple examples of how to delete:

curl -k -u admin:yourpassword -X DELETE https://localhost:8089/servicesNS/nobody/kvstoretest/storage/collections/data/kvstorecoll/5410be5441...

The thing is, I can't find the data path. I can't use the above command with the correct path if I can't figure out what the correct path is. Any suggestions? TIA, Joe
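
Each KV store record's id (the last path segment in that DELETE URL) lives in the hidden _key field; one way to expose it is a sketch like this, where the lookup name matches the post:

| inputlookup my_kvstore
| eval key=_key
| table key Data Desc Type

The DELETE path is then /servicesNS/nobody/<app>/storage/collections/data/<collection>/<key>, where <app> is the app that owns the collection and <collection> is the collection name from collections.conf.
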
Hello folks, I have JSON data in the format below. I am looking for the best way to table the list of keys, which will eventually be used for an input dropdown in a dashboard. The output of the table needs to be like this:

bzk.f1
bzk.f4
bzk.f8

The data:

{
  "bzk": {
    "f1": "ABC",
    "f4": "ABC",
    "f8": "ABC"
  }
}

Your help is much appreciated.
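
One sketch for enumerating extracted field names like these uses fieldsummary; the index, sourcetype, and prefix are placeholders:

index=my_index sourcetype=my_json
| fieldsummary
| where like(field, "bzk.%")
| table field
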
Currently using a manual verification of non-US logins:

sourcetype="o365:management:activity"
| iplocation ActorIpAddress
| search Country!="United States" action=success
| stats count by UserId, Operation, ActorIpAddress, Country, action
| sort -count

I want to create a search that will show failed logins followed by a success for a user, regardless of source IP. Thanks.
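
A sketch of one sequence-detection approach using streamstats; the action values are assumptions to adjust to what the data actually contains:

sourcetype="o365:management:activity" (action=failure OR action=success)
| sort 0 UserId _time
| streamstats current=f window=1 last(action) AS prev_action BY UserId
| where action=="success" AND prev_action=="failure"
| table _time UserId Operation ActorIpAddress Country
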
I'm having issues properly extracting all the fields I'm after from some JSON. The logs are from a script that dumps all the AWS security groups into a JSON file that is ingested into Splunk by a UF. Below is a sanitized example of the output for one AWS security group. I've tried various iterations of spath with mvzip, mvindex, and mvexpand. I've also tried foreach, to no avail. I'm stumped as to how to get Splunk to pull out each instance of CidrIp and Description inside the FromPort. The end goal is to be able to search for a port or an address and get back all the corresponding info.

Example search:

index=something FromPort=22 | table FromPort, CidrIp, Description, ToPort

Example results:

FromPort, CidrIp, Description, ToPort
22, 10.10.10.1, Server01 SSH rule, 22
22, 10.10.10.2, Server 002 inbound, 22
etc....

Right now my field extraction only returns the first value for each rule. Working correctly, it would look like the example above and would contain all the rules in the log. Here is a runnable repro of what I have so far (the raw JSON is kept on one line):

| makeresults
| eval _raw="{ \"Description\": \"Rules for server\", \"GroupId\": \"sg-02d3a65ece83ba3a98\", \"GroupName\": \"Fake group name\", \"IpPermissions\": [ { \"FromPort\": 22, \"IpProtocol\": \"tcp\", \"IpRanges\": [ { \"CidrIp\": \"10.64.77.59/32\", \"Description\": \"Monitoring App - SSH\" }, { \"CidrIp\": \"10.64.77.24/32\", \"Description\": \"Monitoring App - SSH\" }, { \"CidrIp\": \"10.64.77.29/32\", \"Description\": \"Some Host - SSH\" }, { \"CidrIp\": \"10.64.77.11/32\", \"Description\": \"Monitoring App - SSH\" }, { \"CidrIp\": \"10.64.77.136/32\", \"Description\": \"SSH\" }, { \"CidrIp\": \"10.64.77.171/32\", \"Description\": \"SSH\" }, { \"CidrIp\": \"10.64.77.37/32\", \"Description\": \"Monitoring App - SSH\" }, { \"CidrIp\": \"10.64.77.174/32\", \"Description\": \"Server003\" }, { \"CidrIp\": \"10.64.77.154/32\", \"Description\": \"Server004\" }, { \"CidrIp\": \"10.226.109.245/32\", \"Description\": \"Server to Server\" }, { \"CidrIp\": \"10.226.109.157/32\", \"Description\": \"Another server to other stuff\" }, { \"CidrIp\": \"10.226.109.172/32\", \"Description\": \"Another server to other stuff\" } ], \"Ipv6Ranges\": [], \"PrefixListIds\": [], \"ToPort\": 22, \"UserIdGroupPairs\": [] }, { \"FromPort\": 49763, \"IpProtocol\": \"tcp\", \"IpRanges\": [ { \"CidrIp\": \"10.64.77.59/32\", \"Description\": \"Monitoring - Other Ports\" }, { \"CidrIp\": \"10.64.77.24/32\", \"Description\": \"Monitoring - Other Ports\" }, { \"CidrIp\": \"10.64.77.37/32\", \"Description\": \"Monitoring - Other Ports\" }, { \"CidrIp\": \"10.64.77.11/32\", \"Description\": \"Monitoring - Other Ports\" }, { \"CidrIp\": \"10.226.109.157/32\", \"Description\": \"Over here to over there\" }, { \"CidrIp\": \"10.226.109.172/32\", \"Description\": \"Over here to over there\" } ], \"Ipv6Ranges\": [], \"PrefixListIds\": [], \"ToPort\": 35226, \"UserIdGroupPairs\": [] }, { \"FromPort\": 139, \"IpProtocol\": \"tcp\", \"IpRanges\": [ { \"CidrIp\": \"10.64.77.29/32\", \"Description\": \"Server 007 - Netbios\" } ], \"Ipv6Ranges\": [], \"PrefixListIds\": [], \"ToPort\": 139, \"UserIdGroupPairs\": [] }, { \"FromPort\": 135, \"IpProtocol\": \"tcp\", \"IpRanges\": [ { \"CidrIp\": \"10.64.77.29/32\", \"Description\": \"Server 007 - DCOM\" } ], \"Ipv6Ranges\": [], \"PrefixListIds\": [], \"ToPort\": 135, \"UserIdGroupPairs\": [] }, { \"FromPort\": 445, \"IpProtocol\": \"tcp\", \"IpRanges\": [ { \"CidrIp\": \"10.64.77.29/32\", \"Description\": \"Server 007 - MS-DS\" } ], \"Ipv6Ranges\": [], \"PrefixListIds\": [], \"ToPort\": 445, \"UserIdGroupPairs\": [] }, { \"FromPort\": 443, \"IpProtocol\": \"tcp\", \"IpRanges\": [ { \"CidrIp\": \"10.64.77.29/32\", \"Description\": \"Server 007 - HTTPS\" } ], \"Ipv6Ranges\": [], \"PrefixListIds\": [], \"ToPort\": 443, \"UserIdGroupPairs\": [] }, { \"FromPort\": -1, \"IpProtocol\": \"icmp\", \"IpRanges\": [ { \"CidrIp\": \"10.64.77.59/32\", \"Description\": \"Monitoring Server - ICMP\" }, { \"CidrIp\": \"10.64.77.24/32\", \"Description\": \"Ping\" }, { \"CidrIp\": \"10.64.77.11/32\", \"Description\": \"Monitoring Server - ICMP\" }, { \"CidrIp\": \"10.64.77.37/32\", \"Description\": \"Monitoring Server - ICMP\" }, { \"CidrIp\": \"10.226.109.157/32\", \"Description\": \"Over here to over there\" }, { \"CidrIp\": \"10.226.109.172/32\", \"Description\": \"Over here to over there\" } ], \"Ipv6Ranges\": [], \"PrefixListIds\": [], \"ToPort\": -1, \"UserIdGroupPairs\": [] }, { \"FromPort\": 1024, \"IpProtocol\": \"tcp\", \"IpRanges\": [ { \"CidrIp\": \"10.64.77.29/32\", \"Description\": \"Server 007 - High Ports\" } ], \"Ipv6Ranges\": [], \"PrefixListIds\": [], \"ToPort\": 65535, \"UserIdGroupPairs\": [] } ], \"IpPermissionsEgress\": [ { \"IpProtocol\": \"-1\", \"IpRanges\": [ { \"CidrIp\": \"0.0.0.0/0\" } ], \"Ipv6Ranges\": [], \"PrefixListIds\": [], \"UserIdGroupPairs\": [] } ], \"OwnerId\": \"223310898711\", \"VpcId\": \"vpc-192ac32be1b1a987c\" }"
| spath IpPermissions{}.FromPort output=a_FromPort
| spath IpPermissions{}.IpProtocol output=a_IpProtocol
| spath IpPermissions{}.IpRanges{}.CidrIp output=a_CidrIp
| spath IpPermissions{}.IpRanges{}.Description output=a_Description
| spath IpPermissions{}.ToPort output=a_ToPort
| eval a_zipped=mvzip(mvzip(mvzip(mvzip(a_FromPort, a_IpProtocol), a_CidrIp), a_Description), a_ToPort)
| mvexpand a_zipped
| eval b_FromPort=mvindex(split(a_zipped,","),0), b_IpProtocol=mvindex(split(a_zipped,","),1), b_CidrIp=mvindex(split(a_zipped,","),2), b_Description=mvindex(split(a_zipped,","),3), b_ToPort=mvindex(split(a_zipped,","),4)
| table b_FromPort, b_IpProtocol, b_CidrIp, b_Description, b_ToPort, a_zipped
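
A sketch of an alternative that avoids zipping parallel arrays (the FromPort and CidrIp arrays have different lengths here, which is why mvzip pairs values up incorrectly): expand each permission object first, then expand each IP range within it.

| spath path=IpPermissions{} output=perm
| mvexpand perm
| spath input=perm FromPort
| spath input=perm IpProtocol
| spath input=perm ToPort
| spath input=perm path=IpRanges{} output=range
| mvexpand range
| spath input=range CidrIp
| spath input=range Description
| table FromPort, IpProtocol, CidrIp, Description, ToPort
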