Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

I have some data coming in with multiple date formats in the same field, and I'm having trouble reporting on these dates. I'd like to keep the dates consistent. How do I create a statement to change just the dates that are in the undesirable format?

What I have:

DateAdded
2021-11-03
2/15/2022
1/13/2023

What I would like:

DateAdded
2021-11-03
2022-02-15
2023-01-13
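One possible approach, sketched under the assumption that the field is named DateAdded and only the two formats shown ever appear (the parsed field name is illustrative): parse each value with strptime, trying both formats, then re-render it with strftime.

  | eval parsed=coalesce(strptime(DateAdded, "%Y-%m-%d"), strptime(DateAdded, "%m/%d/%Y"))
  | eval DateAdded=strftime(parsed, "%Y-%m-%d")

Since strptime returns null when the value does not match the given format, coalesce keeps the first parse that succeeds, and dates already in the desired format pass through unchanged.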
Hello, one of these works, one does not.

1.]

index="conmon" earliest>="01/01/2022:00:00:000" source="AwesomeCloudPOAM.xml" | rex field=_raw "<version>(?<version>[^<]+)</version>" | table version

This works fine and brings back the value from the XML node. However, this search fails every time:

2.]

index="conmon" earliest>="01/01/2022:00:00:000" source="AwesomeCloudPOAM.xml" | rex field=_raw "<oscal-version>(?<oscal-version>[^<]+)</oscal-version>" | table oscal-version

with this error:

Error in 'rex' command: Encountered the following error while compiling the regex '<oscal-version>(?<oscal-version>[^<]+)</oscal-version>': Regex: syntax error in subpattern name (missing terminator).

Can anyone help me?
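A hedged note on the likely cause: regex subpattern names may contain only letters, digits, and underscores, so the hyphen in (?<oscal-version>...) is what the "missing terminator" error is complaining about. A minimal sketch that captures into an underscore-safe name and renames it afterwards:

  index="conmon" earliest>="01/01/2022:00:00:000" source="AwesomeCloudPOAM.xml"
  | rex field=_raw "<oscal-version>(?<oscal_version>[^<]+)</oscal-version>"
  | rename oscal_version AS "oscal-version"
  | table "oscal-version"

The literal <oscal-version> text in the pattern is fine; only the capture-group name carries the restriction.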
I have the following query created:

index=my_idx source=mySource | stats count by sourceTopic

which gives me results like:

MY/EVENTS/EV1/TYPE1 | 16170
MY/EVENTS/EV1/TYPE2 | 3558
MY/EVENTS/EV1/TYPE3 | 419
MY/EVENTS/EV2/TYPE1 | 123391
MY/EVENTS/EV2/TYPE2 | 16734
MY/EVENTS/EV2/TYPE3 | 880

But I would need my results like:

TYPE1 EV1 | 16170
TYPE2 EV1 | 3558
TYPE3 EV1 | 419
TYPE1 EV2 | 123391
TYPE2 EV2 | 16734
TYPE3 EV2 | 880

How would I achieve this? How can I rename the values that I obtain in stats?
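A minimal sketch, assuming sourceTopic always has the shape MY/EVENTS/<ev>/<type> (the ev and type field names are illustrative): extract the last two path segments before the stats and recombine them in the desired order.

  index=my_idx source=mySource
  | rex field=sourceTopic "^MY/EVENTS/(?<ev>[^/]+)/(?<type>[^/]+)$"
  | eval sourceTopic=type . " " . ev
  | stats count by sourceTopic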
So I have an issue that I can't quite figure out the proper syntax for. I'm parsing logs for an ERROR message. Searching for ERROR works fine, but I wanted to regex out the actual error code. I'm creating my own regex, since some of the logs that come in contain a different number of lines, and the usual regex can't pick up on all of them. Some example logs are:

Sat Feb 25 2023 15:11:04 ERROR: Could not obtain time sample from 10.11.111.111 (10.11.111.111:123) using NTP (unauthenticated); error 10060: Timed out
Sat Feb 25 2023 15:11:04 Info: No server provided a usable time sample; discovering time servers to use

Wed Jan 18 2023 15:27:32 ERROR: Could not obtain time sample from 10.11.111.111 (10.11.111.111:123) using NTP (unauthenticated); error 10054: Connection reset
Wed Jan 18 2023 15:27:32 Info: Summary: 1 sample; delta is -0.0005745 seconds, latency +0.0013496
Wed Jan 18 2023 15:27:32 Info: Alignment of -0.0005745 seconds in progress; +0.999738 secs/second (156209/156250) for 1996 ms using default method; net change -0.0005748 secs
Wed Jan 18 2023 15:27:34 Info: Local clock aligned backward to match 10.82.39.229; delta -0.0005745 seconds, protocol NTP, latency 0.0013496 seconds
Wed Jan 18 2023 15:27:34 Info: Next time check due in 30 seconds (fixed schedule)

My regex works and finds what I need, from the start of ERROR to right before the next day of the week. The regex that I used in regExr, and which worked on those example logs, is:

(?<ERROR>(ERROR:).*(?= Sun| Mon| Tue| Wed| Thu| Fri| Sat))

I was also using the non-greedy flag on regExr, but I was told that Splunk was already non-greedy. My issue is that when I use the rex command with that regex, it won't pick anything up. I used a shorter iteration of it to just get and store the ERROR part, and it did grab and store the string in an ERROR field. I was following along with this guide: http://karunsubramanian.com/splunk/how-to-use-rex-command-to-extract-fields-in-splunk/ I was just wondering how to properly implement the above working regex in Splunk syntax to grab and store the error code and short description in the field ERROR. Thank you for any guidance.
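A hedged guess at the cause: in rex, the dot does not match newlines by default, so a pattern that has to span from the ERROR line into the next line never matches inside Splunk, even though regExr's single-string view makes it look like it should. A minimal sketch that turns on dot-matches-newline with the (?s) modifier and stops lazily at the next day-of-week token:

  | rex field=_raw "(?s)(?<ERROR>ERROR:.*?)(?=\s(?:Sun|Mon|Tue|Wed|Thu|Fri|Sat)\s)"

The lazy .*? quantifier is what makes the capture stop at the first following weekday rather than the last one.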
Hi Team, we are planning to integrate our Splunk Web solution with SolarWinds and ServiceNow. Please let us know the most cost-effective solution and any process documents/knowledge artifacts available for integrating these systems with Splunk. Thanks. Vaibhav
I was trying to ingest data from a spreadsheet and picked up the following instructions, but I can't find the Lookup Editor, even from the link on the help page. Extract from the help page: "First, I highly recommend checking out the lookup editor app. That app is free and it allows you to make new lookup files and edit them in a nice interface. If you want to import a spreadsheet from Excel, all you have to do is save it as a CSV and import it via the app. To do so, open the Lookup Editor and click the “New” button. Next, click “import from CSV file” at the top right and select your file. This will import the contents of the lookup file into the view. Press save to persist it."
Hi All, I have been trying to capture the error split-up and ratio from the following sample log event, which probably needs a complex regex:

{ [-]
  cluster_id: us-prod-az-200
  kubernetes: { [+] }
  log: { [-]
    appVersion: 0.1.326
    envType: prod
    environment: prod-txn
    log: Request and Response, consumerId=xxxxxx-xxxx-xxxx, duration=144, correlationId=0-0-0, requestType=ItemDetails, requestIds=43947812:212001513:217953998:55079684:748708658:42068997:16875745:392480759:138021380:49984819:3933145:54016598:500257082:702903612:50179695:54056450, reqOfferIds=,requestPrimaryMap=, storeIds=0000, status=PARTIAL, responseSize=16, isCustomerAddressPresent=true, extPostalCode=null, fulfillmentIntent=, error=138021380=404.IMS.STORE100;500.IMS.PRICE.103:42068997=400.IMS.STORE.100:3933145=500.IMS.OFFER.100;404.IMS.PRICE.103:212001513=404.IMS.STORE.100:217953998=404.IMS.STORE.100;400.IMS.100:500257082=404.IMS.STORE.100, missingBadgeItems=138021380:702903612:55079684:49984819:54056450:3933145:217953998:392480759, pickupStoreIds=
    logLine: 93
    methodName: Utils
    serverName: 11.16.251.37
    time: 2023-02-27 14:43:33.999
    timeStamp: 1677509013999
    type: INFO
  }
  time: 2023-02-27T14:43:33.999844088Z
}

Each event is unique; the error attribute is a multivalued field with delimiters for each id (only in case of error), or null, as shown below, e.g.:

error=138021380=404.IMS.STORE100;500.IMS.PRICE.103:42068997=400.IMS.STORE.100:3933145=500.IMS.OFFER.100;404.IMS.PRICE.103:212001513=404.IMS.STORE.100:217953998=404.IMS.STORE.100;400.IMS.100:500257082=404.IMS.STORE.100,

OR

error=,

My requirement is to compute each error code's split-up and the error ratio in a tabular fashion, where ratio = each error code's count / total responseSize, and responseSize is the number of ids passed in each request per event:

error | count | responseSize | ratio
404.IMS.STORE100 | aggregation of the error | aggregate of responseSize | round((count/responseSize)*100,2)
500.IMS.PRICE.103 | aggregation of the error | aggregate of responseSize |

Can someone please help me find a better way to get the error breakdown with the ratio as per the above requirement? I was trying to segregate the error split-up and aggregate the responseSize, but the search is not giving the expected results when tabulating:

index=<index name> "log.envType"=prod "log.methodName"="Utils"
| rex field=_raw "responseSize=*(?<responseSize>.+?),"
| rex field=_raw ", error=*(?<errorMap>.+), missingBadgeItems"
| eval errors0=replace(errorMap, "=", ";")
| eval errors1=split(errors0, ":")
| rex field=errors1 "(?<errorCodes>.*)"
| mvexpand errorCodes
| eval code=split(errorCodes, ";")
| mvexpand code
| table code, responseSize

Can someone please help? Thanks.
We have a rule engine that assigns category codes to items. The category codes are assigned per location. We want to extract statistical data from the log to show how many messages were published for each location. For example, we want to get a result like the one below from the messages that follow:

Location code | count
ABC | 2
XYZ | 1
DEF | 1
IJK | 2

message #1:

{"Item Id": "1", "locationCategoryCodes": [{"categoryCodes": [{"categoryCode": "CAT_1", "ruleID": ["138563"]}], "locationCode": "ABC"}, {"categoryCodes": [{"categoryCode": "CAT_1", "ruleID": ["138563"]}], "locationCode": "XYZ"}, {"categoryCodes": [{"categoryCode": "CAT_2", "ruleID": ["138561"]}], "locationCode": "DEF"}, {"categoryCodes": [{"categoryCode": "CAT_3", "ruleID": ["138614"]}], "locationCode": "IJK"}], "timestamp": "2023-01-27T00:10:32.367 +0000"}

message #2:

{"Item Id": "2", "locationCategoryCodes": [{"categoryCodes": [{"categoryCode": "CAT_1", "ruleID": ["138563"]}], "locationCode": "ABC"}, {"categoryCodes": [{"categoryCode": "CAT_3", "ruleID": ["138614"]}], "locationCode": "IJK"}], "timestamp": "2023-01-27T00:10:32.367 +0000"}

Thanks Anirban
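A minimal sketch, assuming the events are the raw JSON messages shown above: spath can read all the locationCode values into one multivalued field, which mvexpand then fans out for counting.

  <your base search>
  | spath path=locationCategoryCodes{}.locationCode output=locationCode
  | mvexpand locationCode
  | stats count BY locationCode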
Hi All, I am working on a dashboard which makes use of the trellis layout. Below is the query I am using:

index="_internal" sourcetype="test" source="*test.log*"
| rename host as ipaddress
| join ipaddress [| inputlookup activemachines.csv | fields ipaddress]
| stats dc(ipaddress) as instances_sentinel_installed
| appendcols [| inputlookup activemachines.csv | stats count(ipaddress) as total_machines]
| eval Percent=round((instances_sentinel_installed/total_machines)*100,2)
| rename total_machines as "Total Active Machines" instances_sentinel_installed as "Instances with Sentinel Installed"
| table "Total Active Machines" "Instances with Sentinel Installed" Percent

The trellis layout looks like the one I have shared in the screenshot: 2129 is "Total Active Machines", 502 is "Instances with Sentinel Installed", and 24 is the percentage. I have to use the field name on click of the trellis. For example, if I click on 2129 I should get the field name "Total Active Machines", and this is what I am using in the custom link. I tried this and a few more, like click.name, click.name2, click.value and so on:

<drilldown>
  <set token="tokName">$trellis.value$</set>
  <link target="_blank">search?asdasdasd$tokName$&amp;earliest=-60m%40m&amp;latest=now</link>
</drilldown>

With trellis.value, I am getting the field value, i.e. 2129, but I want the field name. Can anyone please help me with this? I appreciate your response. Thanks in advance, NVP
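A hedged suggestion based on the trellis drilldown tokens: $trellis.name$ should carry the name of the clicked trellis segment (here, the aggregation/field name such as "Total Active Machines"), while $trellis.value$ carries its value, so the drilldown might look like:

<drilldown>
  <set token="tokName">$trellis.name$</set>
  <link target="_blank">search?asdasdasd$tokName$&amp;earliest=-60m%40m&amp;latest=now</link>
</drilldown>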
Hi, I have a Splunk log where we have an end time and a time to serve the request (in milliseconds). I want to calculate the start time by subtracting: start time = end time - time to serve the request. Can you please help me with a query that will help me achieve this requirement?

Example:
End time - 2023-02-27 10:46:13.559
Time to serve request - 1131 (millisec)
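A minimal sketch, assuming the two values land in fields named end_time and serve_ms (both names, and the timestamp format, are illustrative and should match your extraction):

  | eval end_epoch=strptime(end_time, "%Y-%m-%d %H:%M:%S.%3N")
  | eval start_epoch=end_epoch - (serve_ms / 1000)
  | eval start_time=strftime(start_epoch, "%Y-%m-%d %H:%M:%S.%3N")

The subtraction happens in epoch seconds, so the millisecond duration is divided by 1000 first.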
I was doing a connectivity test, and on node 2 the test connectivity passed, but on node 3 I got this error: "Module import of the connector py module failed. Please look at ${PHANTOM_LOG_DIR}/*_spawn.log (after enabling DEBUG Logs) for more information." I got some output from the logs:

spawn3.cpp : 575 : SID: fe7adfaf-1ff2-4994-b750-4c9024fd0a92 Created the Py Module name for awssts_connector
Feb 25 05:36:06 SPAWN3[18582]: TID:18582 : ERROR: SPAWN3 : ../../include/py3_err.h : 72 : error_type: 0x7fcdd1930560, the_error: 0x7fcdd205c828, the_traceback: (nil)
Feb 25 05:36:06 SPAWN3[18582]: TID:18582 : ERROR: SPAWN3 : spawn3.cpp : 584 : Error occurred PyImport_Import No module named 'awssts_connector'
Feb 25 05:36:06 SPAWN3[18582]: TID:18582 : DEBUG: SPAWN3 : json_processor.cpp : 552 : Got the following JSON after py module execution:
Hi, Splunk universal forwarder version 9.0.3 is running at 100% CPU on Linux, even after a restart. Is there a known issue/workaround for this? Thanks, Joe
According to the Splunk documentation on the [splunktcp-ssl:<port>] stanza, it states: "* Use this stanza type if you are receiving encrypted, parsed data from a forwarder." UFs cook, but do not 'parse', the data. Thus, is this stanza effective for sending encrypted data from the UF to indexers?
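For context, a hedged sketch of the receiving-side configuration the stanza describes, with illustrative port and certificate paths; splunktcp-ssl is used to receive TLS-encrypted traffic from forwarders regardless of whether the payload is cooked (UF) or fully parsed (HF):

inputs.conf on the indexer:

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/server.pem
sslPassword = <password>
requireClientCert = false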
Hi all. I'm using Splunk 9.0.2 and DB Connect 3.11.1. Sometimes my connection from DB Connect to my database fails, so I'm wondering whether it is possible to automatically retry the missed query or not. My query depends on time to retrieve the rows inserted in the last 5 minutes, so a missed query causes data loss. My query:

SELECT * FROM DB.MY_TABLE1 WHERE insert_date >= SYSDATE - INTERVAL 5 MINUTE
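One commonly used alternative, sketched under the assumption that insert_date only ever increases: configure the input as a rising-column input so DB Connect checkpoints the last value it saw and resumes from there after a failed run, instead of relying on a fixed SYSDATE window.

SELECT * FROM DB.MY_TABLE1
WHERE insert_date > ?
ORDER BY insert_date ASC

Here insert_date is set as the rising column in the input definition, and DB Connect substitutes the stored checkpoint for the ? placeholder on each run, so rows from a missed interval are picked up by the next successful one.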
We are using Splunk Cloud and getting Juniper OS logs as syslog from a heavy forwarder to Splunk Cloud, but in Splunk Cloud the fields are not breaking out. We have installed the Juniper add-on in Splunk Cloud. Does the add-on also need to be on the heavy forwarder so that parsing will happen?
Hello, is there any way to send scheduled reports (CSV) to a specific Google Drive location? Or is there any add-on available for sharing CSV reports with Google Data Studio?
Hi Team, if a file is too old (e.g. the file was created in 2022 and has had no further updates since), will events from that source file be visible in the index? This will be the first-time ingestion of the source file into Splunk. If it can be read, what additional parameters should be applied?
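A hedged pointer: a monitor input will normally read an old file the first time it is ingested, as long as nothing excludes it. A minimal sketch with an illustrative path and sourcetype:

[monitor:///var/log/old_app.log]
index = main
sourcetype = my_sourcetype

Two settings worth checking: leave ignoreOlderThan unset in inputs.conf (when set, it skips files whose modification time falls outside the window), and note that MAX_DAYS_AGO in props.conf (default 2000 days) caps how old a parsed event timestamp may be.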
I'm using DB Connect 3.11.0 and can't add or change the description of inputs via the GUI. It appears since I upgraded from version 3.8. I can click inside the textarea and the cursor is shown as normal, but no keystroke is registered and I can't even delete text out of it. As a workaround I'll use Config Explorer with debug/refresh, but I don't want to use it every time I create the description for a new input or change the description of an existing input. Is this a known issue? Does anyone else have this behavior too?
I'm trying to add a lookup to enrich results returned from a 'simple' search. The search command I'm using (and I have limited it to one key/value pair) is:

index=ee_commercialbankingeforms_pcf "*LEVEL=WARN*"
| rex "^\S+\s(?<microService>\S+).*MESSAGE=(?<message>.+)"
| bucket _time span=day
| stats count by microService, message
| lookup [ {JIRASummary: "No JWT found on UserPrincipal and no custom JWT claims configured. No nested JWT will be sent in downstream requests!", JIRA: "CBE-968"} ] JIRASummary AS message OUTPUT JIRA

...but I keep seeing the following error:

Error in 'SearchParser': Missing a search command before '{'. Error at position '192' of search query 'search index=ee_commercialbankingeforms_pcf "*LEVE...{snipped} {errorcontext = lookup [ {JIRASummar}'.

Can someone explain the error that I see? Regards Mick
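A hedged reading of the error: the lookup command expects the name of a lookup definition or lookup table file, not an inline JSON literal, so the parser fails as soon as it hits the '{'. A minimal sketch of the usual pattern, assuming a lookup named jira_summaries (with columns JIRASummary and JIRA, backed by a CSV) has been created first, e.g. via the Lookup Editor:

index=ee_commercialbankingeforms_pcf "*LEVEL=WARN*"
| rex "^\S+\s(?<microService>\S+).*MESSAGE=(?<message>.+)"
| bucket _time span=day
| stats count by microService, message
| lookup jira_summaries JIRASummary AS message OUTPUT JIRA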
Splunk search events return JSON-formatted log data. I want to remove a particular key:value pair, since the value of this key is huge (in terms of length) and unnecessary. How can I do so? Sample log data:

{
  "abcd1": "asd",
  "abcd2": [],
  "abcd3": true,
  "toBeRemoved": [{
    "abcd8": 234,
    "abcd9": [{
      "abcd10": "asd234"
    }],
    "abcd11": "asdasd"
  }],
  "abcd4": 324.234,
  "abcd5": "dfsad dfsdf",
  "abcd6": 0,
  "abcd7": "asfsdf"
}

The key:value pair to be removed is "toBeRemoved". NOTE: this is formatted data; fields can contain strings, numbers, both, lists, etc.
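A minimal search-time sketch, assuming a Splunk version that ships the JSON eval functions (8.1 or later, if memory serves) and that the event body is the raw JSON:

  | eval _raw=json_delete(_raw, "toBeRemoved")

json_delete drops the named key and returns the rest of the document unchanged; if only a few fields are needed, extracting them with spath and tabling those may be simpler still.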