
All Posts

You can either start from the beginning, adding subsequent commands one at a time to see when your results stop being what you want them to be, or work from the end, removing commands one by one until your intermediate results start making sense.
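For instance, with a hypothetical three-command pipeline, the first approach would look like this:

``` step 1: confirm the base search returns the events you expect ```
index=web sourcetype=access_combined

``` step 2: add the next command and inspect the intermediate results ```
index=web sourcetype=access_combined
| stats count by status

``` step 3: continue one command at a time until the output stops making sense ```
index=web sourcetype=access_combined
| stats count by status
| sort - count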
So I have two fields that I want to subtract. One is SequenceNumber_Comment (e.g. 211) and the other is SequenceNumber_Withdrawal (e.g. 210). I want to subtract the values and put the result in a field called match. Below is the SPL I have, but I get an empty value.

| eval match = tonumber(SequenceNumber_Comment) - tonumber(SequenceNumber_Withdrawal)

What do I do? Thank you!
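A minimal standalone test of that eval, using hard-coded example values, looks like this:

| makeresults
| eval SequenceNumber_Comment=211, SequenceNumber_Withdrawal=210
| eval match = tonumber(SequenceNumber_Comment) - tonumber(SequenceNumber_Withdrawal)
``` expect match=1; if the real search returns null, check that both fields actually exist on the same event ```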
No value is displayed in the TotalTrans field when I run the given query.
OK. This is a Windows event. The normal approach to this kind of event would be to ingest them as XML, using the renderXml=true setting in the input(s), and use TA_windows to parse them.
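A minimal inputs.conf sketch of that setting (the Security channel is just an example):

[WinEventLog://Security]
disabled = 0
renderXml = true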
Yes. The data is organized in KV pairs. What is different is that it uses two different connectors, "=" and ":". It also does not quote the values. So I am not sure if automatic extraction is feasible. But at search time, you can simply do

| kv pairdelim=" " kvdelim="=:"

Your sample data will give the following fields:

ComputerName = sacreblue
Domaine_du_compte = AUTORITE NT
EventCode = 4672
EventType = 0
ID de sécurité = AUTORITE NT\Système
ID_d_ouverture_de_session = 0x3e7
Keywords = Succès de l'audit
LogName = Security
Message = Privilèges spéciaux attribués à la nouvelle ouverture de session.
Nom_du_compte = Système
OpCode = Informations
Privilèges = SeAssignPrimaryTokenPrivilege
RecordNumber = 2746
SourceName = Microsoft Windows security auditing.
TaskCategory = Ouverture de session spéciale
Type = Information

Here is an emulation that you can play with and compare with real data:

| makeresults
| eval _raw = "04/29/2014 02:50:23 PM LogName=Security SourceName=Microsoft Windows security auditing. EventCode=4672 EventType=0 Type=Information ComputerName=sacreblue TaskCategory=Ouverture de session spéciale OpCode=Informations RecordNumber=2746 Keywords=Succès de l'audit Message=Privilèges spéciaux attribués à la nouvelle ouverture de session. Sujet : ID de sécurité : AUTORITE NT\Système Nom du compte : Système Domaine du compte : AUTORITE NT ID d'ouverture de session : 0x3e7 Privilèges : SeAssignPrimaryTokenPrivilege SeTcbPrivilege SeSecurityPrivilege SeTakeOwnershipPrivilege SeLoadDriverPrivilege SeBackupPrivilege SeRestorePrivilege SeDebugPrivilege SeAuditPrivilege SeSystemEnvironmentPrivilege SeImpersonatePrivilege"
``` data emulation above ```

I still have a question about your conversion to XML. Do you mean that you use an external tool to convert the raw text into XML before ingesting it into Splunk? If you have that option, why not convert the raw text into JSON, for which Splunk has better support?
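If you did want to try making the extraction automatic, delimiter-based search-time extraction can be declared in .conf files. This is an untested sketch with an assumed sourcetype name, and values containing spaces may not survive the space pair-delimiter:

# transforms.conf
[kv_equals_or_colon]
DELIMS = " ", "=:"

# props.conf
[your:sourcetype]
REPORT-kv_equals_or_colon = kv_equals_or_colon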
This is the new link to the documentation. https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/Configuredistributedsearch#Use_the_CLI
I want to display the total transaction count (without the where condition) in the result, alongside other fields that do have a specific where condition. For example:

| eval totalResponseTime=round(requestTimeinSec*1000)
| convert num("requestTimeinSec")
| rangemap field="totalResponseTime" "totalResponseTime"=0-3000
| rename range as RangetotalResponseTime
| eval totalResponseTimeabv3sec=round(requestTimeinSec*1000)
| rangemap field="totalResponseTimeabv3sec" "totalResponseTimeabv3sec"=3001-60000
| rename range as RangetotalResponseTimeabv3sec
| eval Product=case((like(proxyUri,"URI1") AND like(methodName,"POST")) OR (like(proxyUri,"URI2") AND like(methodName,"GET")) OR (like(proxyUri,"URI3") AND like(methodName,"GET")),"ABC")
| bin span=5m _time
| stats count(totalResponseTime) as TotalTrans count(eval(RangetotalResponseTime="totalResponseTime")) as TS<3S count(eval(RangetotalResponseTimeabv3sec="totalResponseTimeabv3sec")) as TS>3SS by Product URI methodName _time
| eval TS<XS=case(Product="ABC",'TS<3S')
| eval TS>3S = 'TotalTrans'-'TS<XS'
| eval SLI=case(Product="ABC",round('TS<3S'/TotalTrans*100,4))
| rename methodName AS Method
| where (Product="ABC") and (SLI<99)
| stats sum(TS>3S) As AvgImpact count(URI) as DataOutage by Product URI Method
| fields Product URI Method TotalTrans SLI AvgImpact DataOutage
| sort Product URI Method
I just had this same problem with 6.4.0 and found a workaround. The Tenable docs for the Add-On describe a "Verify SSL Certificate" setting that, for whatever reason, is not visible in the UI: https://docs.tenable.com/integrations/Splunk/Content/Splunk2/ConfigureTenablescCertificatesS2.htm

Modify /opt/splunk/etc/apps/TA-tenable/bin/tenable_consts.py, changing True to False where applicable to your deployment. Save and close the file. No restart is required.

verify_ssl_for_ot = False
verify_ssl_for_sc_cert = False
verify_ssl_for_sc_api_key = False
verify_ssl_for_sc_creds = False
The only thing that could work (but I haven't done this myself) is ingest actions. You'd need to use ingest actions to rewrite the index on already-parsed data. The caveat is that I'm not sure whether you can define it as a "default" action or whether you have to define it separately for every sourcetype.
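For comparison, the classic index rewrite, which only applies to unparsed data (hence the pointer to ingest actions here), looks roughly like this sketch with assumed stanza names:

# transforms.conf
[route_to_other_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = my_other_index

# props.conf
[my:sourcetype]
TRANSFORMS-route = route_to_other_index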
Thanks, I am able to get the error count now. Could you please let me know how to get this value in Python code? If I run the code below I get events instead of statistics. How do I get statistics in the code?

import os
import urllib.parse
import requests

# splunk_token, splunk_host and splunk_port are assumed to be defined elsewhere
payload = (
    'search index="prod_k8s_onprem_vvvb_nnnn" "k8s.namespace.name"="apl-siii-iiiii" '
    '"k8s.container.name"="uuuu-dss-prog" NOT k8s.container.name=istio-proxy '
    'NOT log.level IN(DEBUG,INFO) (error OR exception) '
    '(earliest="07/25/2024:11:30:00" latest="07/25/2024:12:30:00")\n'
    '| addinfo\n'
    '| bin _time span=5m@m\n'
    "| stats count(eval('log.level'=\"ERROR\")) as error_count by _time\n"
    '| eventstats stdev(error_count)'
)
print(payload)

# URL-encode the search string for the POST body
payload_escaped = f'search={urllib.parse.quote(payload)}'
headers = {
    'Authorization': f'Bearer {splunk_token}',
    'Content-Type': 'application/x-www-form-urlencoded'
}
url = f'https://{splunk_host}:{splunk_port}/services/search/jobs/export?output_mode=json'

response = requests.post(url, headers=headers, data=payload_escaped, verify=False)
print(f'{response.status_code=}')

txt = response.text
if response.status_code == 200:
    # the export endpoint streams newline-delimited JSON objects,
    # so this bracket-wrapping alone does not yield valid JSON
    json_txt = f'[\n{txt}]'
    os.makedirs('data', exist_ok=True)
    with open("data/output_deploy.json", "w") as f:
        f.write(json_txt)
else:
    print(txt)
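One way to pull the statistics rows out of that export stream: with output_mode=json, the export endpoint returns one JSON object per line, each carrying a "preview" flag and a "result" dict. A sketch reusing the txt variable from the code above (verify the key names against your actual response):

import json

rows = []
for line in txt.splitlines():
    if not line.strip():
        continue
    obj = json.loads(line)
    # keep only final (non-preview) statistics rows
    if not obj.get("preview", True) and "result" in obj:
        rows.append(obj["result"])

for row in rows:
    print(row.get("_time"), row.get("error_count"))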
If you want the array from the "queue_mem_check" field ingested as separate events, I'm afraid you'll have to preprocess it with an external tool before getting the data into Splunk.
Either I misunderstand something, or you'd have your lookup "full" for only a quarter of the day (or you're re-creating it wholesale, in which case it's unclear why you'd want several copies of the same data). What problem are you trying to solve?
Yeah, you're right. It was the sawtooth going the other way. It looks strange. Are you sure you don't have any network-level issues? And do you see any other interesting stuff in _internal (outside of the Metrics component) for this forwarder?
Just extract the content of "msg" into a new field, then apply spath:

| rex "msg=(?<msg>.+)"
| spath input=msg

Here is the output from your sample data:

meteoTemp = 17.9
meteoHumidity = 64
meteoRainlasthour = 0
meteoWindSpeed = 6.04
meteoWindDirection = SW
meteolunarPercent = 67.3
msg = {"meteoTemp":17.9,"meteoHumidity":64,"meteoRainlasthour":0,"meteoWindSpeed":6.04,"meteoWindDirection":"SW","meteolunarPercent":67.3}

This is an emulation for you to play with and compare with real data:

| makeresults
| eval _raw = "Fri Jul 26 15:24:46 BST 2024 name=mqtt_msg_received event_id= topic=meteobridge msg={\"meteoTemp\":17.9,\"meteoHumidity\":64,\"meteoRainlasthour\":0,\"meteoWindSpeed\":6.04,\"meteoWindDirection\":\"SW\",\"meteolunarPercent\":67.3}"
``` data emulation above ```
Hi @Zer0F8th,

you have to start from the main search, please try this:

| tstats count WHERE index=* earliest=-7d BY host
| append [ | inputlookup lookup.csv | eval count=0 | fields FQDN count ]
| append [ | inputlookup lookup.csv | eval count=0 | fields IP count ]
| append [ | inputlookup lookup.csv | eval count=0 | fields Hostname count ]
| eval host=coalesce(host, FQDN, IP, Hostname)
| stats sum(count) AS total BY host
| where total=0

Ciao.
Giuseppe
Hi, complete Splunk beginner here, so sorry if this is a stupid question. I'm trying to chart some data that I'm pulling from an MQTT broker. The Splunk MQTT Modular Input app is doing its thing and data is arriving every 5 minutes. Using the most basic query ( source="mqtt://MeteoMQTT" ) gives these results:

Fri Jul 26 15:24:46 BST 2024 name=mqtt_msg_received event_id= topic=meteobridge msg={"meteoTemp":17.9,"meteoHumidity":64,"meteoRainlasthour":0,"meteoWindSpeed":6.04,"meteoWindDirection":"SW","meteolunarPercent":67.3}

What I really want to do, though, is break out the values from the most recent data poll into separate "elements" that can then be added to a dashboard. I tried using the spath command:

source="mqtt://MeteoMQTT" | spath output=meteoTemp path=meteoTemp

But that just returned the whole object again. So, how can I parse out the different values (meteoTemp, meteoHumidity, meteoRainlasthour, etc.) so that I can add their most recent values as individual dashboard elements, please? TIA.
Hi All,

So I have a lookup table with the following fields: FQDN, Hostname, and IP. I need to check which of the assets in the lookup table (about 700 assets) have been logging in the last 7 days and which haven't. I used the following basic SPL to get a list of hosts which are logging:

| tstats earliest(_time) latest(_time) count where index=* earliest=-7d by host

The issue I'm having is that the host output in the above SPL comes through in different formats; it may be an FQDN, a Hostname, or an IP address. How do I use my lookup table to check whether the assets in the lookup table are logging without having to do 3 joins on FQDN, Hostname, and IP? Here is an SPL query that somewhat worked but is too inefficient:

| inputlookup lookup.csv
| eval FQDN=lower(FQDN)
| eval Hostname=lower(Hostname)
| join type=left FQDN [
    | tstats latest(_time) as lastTime where index=* earliest=-7d by host
    | rename host as FQDN
    | eval FQDN=lower(FQDN)
    | eval Days_Since_Last_Log = round((now() - lastTime) / 86400)
    | convert ctime(lastTime) ]
| join type=left Hostname [
    | tstats latest(_time) as lastTime where index=* earliest=-7d by host
    | rename host as Hostname
    | eval Hostname=lower(Hostname)
    | eval Days_Since_Last_Log = round((now() - lastTime) / 86400)
    | convert ctime(lastTime) ]
| join type=left IP [
    | tstats latest(_time) as lastTime where index=* earliest=-7d by host
    | rename host as IP
    | eval IP=lower(IP)
    | eval Days_Since_Last_Log = round((now() - lastTime) / 86400)
    | convert ctime(lastTime) ]
| rename lastTime as LastTime
| fillnull value="NULL"
| table FQDN, Hostname, IP, Serial, LastTime, Days_Since_Last_Log

I'm somewhat new to Splunk, so thank you for the help!
Hi @elend,

if the time in dashboard A is defined in a Time input called e.g. "Time", so that the tokens are called $Time.earliest$ and $Time.latest$, you can pass them in the drilldown URL:

earliest=$Time.earliest$&amp;latest=$Time.latest$

Ciao.
Giuseppe
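A minimal SimpleXML sketch of such a drilldown (the dashboard name and the target dashboard's input token name are assumptions):

<drilldown>
  <link target="_blank">/app/search/dashboard_b?form.Time.earliest=$Time.earliest$&amp;form.Time.latest=$Time.latest$</link>
</drilldown>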
So the premise is that I constructed two dashboards: dashboard A as an overview and dashboard B as details. Then, on dashboard A, I configured one of the displays with an on-click trigger that links to dashboard B. However, the global time condition on dashboard A is not connected to dashboard B. Is it possible to make the time dynamic on dashboard B?