All Posts

Yeah, you're right. It was the other-way sawtooth. It looks strange. Are you sure you don't have any network-level issues? And do you see any other interesting stuff in _internal (outside of the Metrics component) for this forwarder?
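If it helps, a hedged sketch of that check (the forwarder host name is hypothetical):

index=_internal sourcetype=splunkd host=my_forwarder component!=Metrics (log_level=WARN OR log_level=ERROR)
| stats count BY component log_level

Anything chatty outside Metrics (TcpOutputProc, AutoLoadBalancedConnectionStrategy, etc.) would point at the connection rather than the forwarder's data flow.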
Just extract the content of "msg" into a new field, then apply spath:

| rex "msg=(?<msg>.+)"
| spath input=msg

Here is the output from your sample data:

meteoHumidity  meteoRainlasthour  meteoTemp  meteoWindDirection  meteoWindSpeed  meteolunarPercent  msg
64             0                  17.9       SW                  6.04            67.3               {"meteoTemp":17.9,"meteoHumidity":64,"meteoRainlasthour":0,"meteoWindSpeed":6.04,"meteoWindDirection":"SW","meteolunarPercent":67.3}

This is an emulation for you to play with and compare with real data.

| makeresults
| eval _raw = "Fri Jul 26 15:24:46 BST 2024 name=mqtt_msg_received event_id= topic=meteobridge msg={\"meteoTemp\":17.9,\"meteoHumidity\":64,\"meteoRainlasthour\":0,\"meteoWindSpeed\":6.04,\"meteoWindDirection\":\"SW\",\"meteolunarPercent\":67.3}"
``` data emulation above ```
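To surface just the most recent poll as individual values for dashboard panels, a sketch building on the extraction above (field names taken from the sample data):

source="mqtt://MeteoMQTT"
| rex "msg=(?<msg>.+)"
| spath input=msg
| stats latest(meteoTemp) AS meteoTemp latest(meteoHumidity) AS meteoHumidity latest(meteoRainlasthour) AS meteoRainlasthour latest(meteoWindSpeed) AS meteoWindSpeed latest(meteoWindDirection) AS meteoWindDirection latest(meteolunarPercent) AS meteolunarPercent

Each resulting field can then back its own single-value panel.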
Hi @Zer0F8th, you have to start from the main search; please try this:

| tstats count WHERE index=* earliest=-7d BY host
| append [ | inputlookup lookup.csv | eval count=0 | fields FQDN count ]
| append [ | inputlookup lookup.csv | eval count=0 | fields IP count ]
| append [ | inputlookup lookup.csv | eval count=0 | fields Hostname count ]
| eval host=coalesce(host, FQDN, IP, Hostname)
| stats sum(count) AS total BY host
| where total=0

Ciao. Giuseppe
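Since host matching is case-sensitive, a hedged variant of the same idea that normalizes case first (lookup field names as in the original post):

| tstats count WHERE index=* earliest=-7d BY host
| eval host=lower(host)
| append [ | inputlookup lookup.csv | eval host=lower(FQDN), count=0 | fields host count ]
| append [ | inputlookup lookup.csv | eval host=lower(IP), count=0 | fields host count ]
| append [ | inputlookup lookup.csv | eval host=lower(Hostname), count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0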
Hi, complete Splunk beginner here, so sorry if this is a stupid question. I'm trying to chart some data that I'm pulling from an MQTT broker. The Splunk MQTT Modular Input app is doing its thing and data is arriving every 5 minutes. Using the most basic query ( source="mqtt://MeteoMQTT" ) gives these results:

Fri Jul 26 15:24:46 BST 2024 name=mqtt_msg_received event_id= topic=meteobridge msg={"meteoTemp":17.9,"meteoHumidity":64,"meteoRainlasthour":0,"meteoWindSpeed":6.04,"meteoWindDirection":"SW","meteolunarPercent":67.3}

What I really want to do, though, is break out the values from the most recent data poll into separate "elements" that can then be added to a dashboard. I tried using the spath command:

source="mqtt://MeteoMQTT" | spath output=meteoTemp path=meteoTemp

But that just returned the whole object again. So, how can I parse out the different values (meteoTemp, meteoHumidity, meteoRainlasthour, etc.) so that I can add their most recent values as individual dashboard elements, please? TIA.
Hi All, So I have a lookup table with the following fields: FQDN, Hostname, and IP. I need to check which of the assets in the lookup table (about 700 assets) have been logging in the last 7 days and which haven't. I used the following basic SPL to get a list of hosts which are logging:

| tstats earliest(_time) latest(_time) count where index=* earliest=-7d by host

The issue I'm having is that the host output in the above SPL comes through in different formats; it may be an FQDN, a Hostname, or an IP address. How do I use my lookup table to check whether the assets in the lookup table are logging without having to do 3 joins on FQDN, Hostname, and IP? Here is an SPL query that somewhat worked, but it is too inefficient:

| inputlookup lookup.csv
| eval FQDN=lower(FQDN)
| eval Hostname=lower(Hostname)
| join type=left FQDN [
    | tstats latest(_time) as lastTime where index=* earliest=-7d by host
    | rename host as FQDN
    | eval FQDN=lower(FQDN)
    | eval Days_Since_Last_Log = round((now() - lastTime) / 86400)
    | convert ctime(lastTime) ]
| join type=left Hostname [
    | tstats latest(_time) as lastTime where index=* earliest=-7d by host
    | rename host as Hostname
    | eval Hostname=lower(Hostname)
    | eval Days_Since_Last_Log = round((now() - lastTime) / 86400)
    | convert ctime(lastTime) ]
| join type=left IP [
    | tstats latest(_time) as lastTime where index=* earliest=-7d by host
    | rename host as IP
    | eval IP=lower(IP)
    | eval Days_Since_Last_Log = round((now() - lastTime) / 86400)
    | convert ctime(lastTime) ]
| rename lastTime as LastTime
| fillnull value="NULL"
| table FQDN, Hostname, IP, Serial, LastTime, Days_Since_Last_Log

I'm somewhat new to Splunk, so thank you for the help!
Hi @elend, if the time range in dashboard A is defined in a Time input called e.g. "Time", the tokens are called $Time.earliest$ and $Time.latest$, and you can pass them in the drilldown URL: earliest=$Time.earliest$&amp;latest=$Time.latest$ Ciao. Giuseppe
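For example, a minimal Simple XML drilldown sketch, assuming dashboard B's id is dashboard_b and its own time input token is also named "Time" (both names hypothetical):

<drilldown>
  <link target="_blank">/app/search/dashboard_b?form.Time.earliest=$Time.earliest$&amp;form.Time.latest=$Time.latest$</link>
</drilldown>

The form. prefix fills dashboard B's time input, so its panels inherit the range picked on dashboard A.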
So the premise is that I constructed two dashboards: dashboard A as an overview and dashboard B as details. Then, on dashboard A, I configured one of the displays to have an on-click trigger that connects to dashboard B. However, the global time condition on dashboard A cannot be connected to dashboard B. Is it possible to make the time dynamic on dashboard B?
Hi @ebd12, you can use the same msi to install Splunk multiple times on different servers, but not on the same server. You cannot install it twice on the same server. If you have a Linux server and completely remove the old installation, you can install again on the same server. If you get an error on the second installation, it could be related to a different issue. Ciao. Giuseppe
@ITWhisperer Thanks for your help and suggestion.
Wouldn't you think if I knew of another way I would have mentioned it? You can't use strings for values in charts.
Hello, am I eligible for another 60-day free trial of Splunk Enterprise with the same Splunk email account? I tried to install Splunk Enterprise on another PC (first with the same .msi used the first time, then with a newly downloaded one), but the installation failed (error: the Splunk Enterprise Setup Wizard ended prematurely). My free trial runs until 11 August 2024, and I didn't uninstall Splunk Enterprise on the first PC. I'm confused, can anyone help me?
@ITWhisperer Any alternative to showcase this in a line chart?
My org has millions of events coming in through firewalls. I had a 24-hour timeframe search take 12.5 hours to run. I was curious about breaking it up into 6-hour timeframes (changing the earliest/latest statements accordingly) and having them outputlookup to the same lookup file. I would then inputlookup the file and enrich accordingly. However, I want to reset after each day, i.e. I do not want the file to keep growing. Would I set append=false on query1, and append=true for query2, query3, and query4?
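For reference, outputlookup with append=false (the default) replaces the lookup file, so the pattern described would look something like this sketch, run in order (index, fields, and the lookup name fw_daily.csv are all hypothetical):

index=firewall earliest=-24h@h latest=-18h@h | stats count by src_ip | outputlookup append=false fw_daily.csv
index=firewall earliest=-18h@h latest=-12h@h | stats count by src_ip | outputlookup append=true fw_daily.csv
index=firewall earliest=-12h@h latest=-6h@h | stats count by src_ip | outputlookup append=true fw_daily.csv
index=firewall earliest=-6h@h latest=now | stats count by src_ip | outputlookup append=true fw_daily.csv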
| foreach Maximum Average Minimum
    [ eval <<FIELD>>_duration=tostring(<<FIELD>>,"duration") ]

However, as I said before, you can't use these duration fields on a chart as they are strings, not numbers.
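As a quick worked check of what tostring(X, "duration") produces (it should render 3750 seconds as something like 01:02:30, i.e. HH:MM:SS):

| makeresults
| eval secs=3750
| eval d=tostring(secs, "duration")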
This looks like a perfectly reasonable event to ingest whole - you should specify that it is JSON format for extraction purposes, and you can use the json_* functions to manipulate the data in your searches. Is it just that you need help to find the timestamp for the event or is Splunk already doing that correctly?
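For example, a minimal props.conf sketch for ingesting each file as one JSON event, applied where the file is first parsed (the sourcetype name oracle_health_json is hypothetical):

[oracle_health_json]
INDEXED_EXTRACTIONS = json
# use the file's own timestamp field
TIMESTAMP_FIELDS = date_extract_linux
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%9N
# events this large are easily cut off at the default truncation limit
TRUNCATE = 0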
@ITWhisperer

index=wma_bext TYPE=FULFILLMENT_REQUEST STATUS="Marshalling"
| eval completion_time=strptime(COMPLETED_TIMESTAMP, "%Y-%m-%dT%H:%M:%S.%3QZ")
| stats count by completion_time FULFILLMENT_START_TIMESTAMP _time
| eval lead_time = (completion_time - FULFILLMENT_START_TIMESTAMP)
| timechart max(lead_time) as "Maximum" avg(lead_time) as "Average" min(lead_time) as "Minimum"
| foreach Maximum Average Minimum
    [ eval <<FIELD>>_hours=round('<<FIELD>>'/3600, 2), <<FIELD>>_minutes=round('<<FIELD>>'/60, 2) ]

When I use the above query and then combine hours and minutes like this: | eval max_d = Maximum_hours.":".Maximum_minutes, the result turns back into a string. Please suggest how I can show my results in HH:MM format for maximum, average, and minimum. Below are the results I currently get with the above query.
(screenshot: event after upload)
Here is the raw file. I have tried this by uploading a test file, but I am not getting events in the proper format when trying to ingest: all the data lands in one event, not as separate events, even though there are separate objects inside. Do I need to modify the file structure, or do I need to configure props at the host level?

{
  "date_extract_linux": "2024-07-26 08:44:23.398743330",
  "database": {
    "script_version": "1.0",
    "global_parameters": { "check_name": "General_parameters", "check_status": "OK", "check_error": "", "script_version": "1.0", "host_name": "flosclnrhv03.pharma.aventis.com", "database_name": "C2N48617", "instance_name": "C2N48617", "database_version": "19.0.0.0.0", "database_major_version": "19", "database_minor_version": "0" },
    "queue_mem_check": [
      { "check_name": "queue_mem_check", "check_status": "OK", "check_error": "", "queue_owner": "LIVE2459_VAL", "queue_name": "AQ$_Q_TASKREPORTWORKTASK_TAB_E", "queue_sharable_mem": "4072" },
      { "check_name": "queue_mem_check", "check_status": "OK", "check_error": "", "queue_owner": "LIVE2459_VAL", "queue_name": "AQ$_Q_PIWORKTASK_TAB_E", "queue_sharable_mem": "4072" },
      { "check_name": "queue_mem_check", "check_status": "OK", "check_error": "", "queue_owner": "LIVE2459_VAL", "queue_name": "AQ$_Q_LABELWORKTASK_TAB_E", "queue_sharable_mem": "4072" },
      { "check_name": "queue_mem_check", "check_status": "OK", "check_error": "", "queue_owner": "LIVE2459_VAL", "queue_name": "AQ$_Q_PIPROCESS_TAB_E", "queue_sharable_mem": "4072" },
      { "check_name": "queue_mem_check", "check_status": "OK", "check_error": "", "queue_owner": "SYS", "queue_name": "ALERT_QUE", "queue_sharable_mem": "0" },
      { "check_name": "queue_mem_check", "check_status": "OK", "check_error": "", "queue_owner": "SYS", "queue_name": "AQ$_ALERT_QT_E", "queue_sharable_mem": "0" }
    ],
    "fra_check": { "check_name": "fra_check", "check_status": "OK", "check_error": "", "flash_in_gb": "40", "flash_used_in_gb": ".62", "flash_reclaimable_gb": "0", "percent_of_space_used": "1.56" },
    "processes": { "check_name": "processes", "check_status": "OK", "check_error": "", "process_percent": "27.3", "process_current_value": "273", "process_limit": "1000" },
    "sessions": { "check_name": "sessions", "check_status": "OK", "check_error": "", "sessions_percent": "16.41", "sessions_current_value": "252", "sessions_limit": "1536" },
    "cdb_tbs_check": [
      { "check_name": "cdb_tbs_check", "check_status": "OK", "check_error": "", "tablespace_name": "UNDOTBS1", "total_physical_all_mb": "65536", "current_use_mb": "26", "percent_used": "0" },
      { "check_name": "cdb_tbs_check", "check_status": "OK", "check_error": "", "tablespace_name": "USERS", "total_physical_all_mb": "65536", "current_use_mb": "4", "percent_used": "0" }
    ],
    "pdb_tbs_check": [
      { "check_name": "pdb_tbs_check", "check_status": "OK", "check_error": "", "pdb_name": "O1NN8944", "tablespace_name": "USERS", "total_physical_all_mb": "32767", "current_use_mb": "1176", "percent_used": "4" },
      { "check_name": "pdb_tbs_check", "check_status": "OK", "check_error": "", "pdb_name": "O1S48633", "tablespace_name": "UNDOTBS1", "total_physical_all_mb": "65536", "current_use_mb": "76", "percent_used": "0" },
      { "check_name": "pdb_tbs_check", "check_status": "OK", "check_error": "", "pdb_name": "O1S48633", "tablespace_name": "TOOLS", "total_physical_all_mb": "65536", "current_use_mb": "5", "percent_used": "0" },
      { "check_name": "pdb_tbs_check", "check_status": "OK", "check_error": "", "pdb_name": "O1NN2467", "tablespace_name": "UNDOTBS1", "total_physical_all_mb": "65536", "current_use_mb": "22", "percent_used": "0" },
      { "check_name": "pdb_tbs_check", "check_status": "OK", "check_error": "", "pdb_name": "O1NN2467", "tablespace_name": "SYSAUX", "total_physical_all_mb": "65536", "current_use_mb": "627", "percent_used": "1" },
      { "check_name": "pdb_tbs_check", "check_status": "OK", "check_error": "", "pdb_name": "O1S48633", "tablespace_name": "SYSTEM", "total_physical_all_mb": "65536", "current_use_mb": "784", "percent_used": "1" },
      { "check_name": "pdb_tbs_check", "check_status": "OK", "check_error": "", "pdb_name": "O1NN8944", "tablespace_name": "SYSAUX", "total_physical_all_mb": "65536", "current_use_mb": "1546", "percent_used": "2" },
      { "check_name": "pdb_tbs_check", "check_status": "OK", "check_error": "", "pdb_name": "O1S48633", "tablespace_name": "SYSAUX", "total_physical_all_mb": "65536", "current_use_mb": "7802", "percent_used": "12" },
      { "check_name": "pdb_tbs_check", "check_status": "OK", "check_error": "", "pdb_name": "O1S48633", "tablespace_name": "USRINDEX", "total_physical_all_mb": "65536", "current_use_mb": "128", "percent_used": "0" }
    ]
  }
}
No, that is the complete opposite of what I am saying!

index=abc
| eval completion_time=strptime(COMPLETED_TIMESTAMP, "%Y-%m-%dT%H:%M:%S.%3QZ")
| stats count by completion_time FULFILLMENT_START_TIMESTAMP _time
| eval lead_time = (completion_time - FULFILLMENT_START_TIMESTAMP)
| timechart max(lead_time) as "Maximum" avg(lead_time) as "Average" min(lead_time) as "Minimum"

Keep the values as numeric differences between timestamps; if you want, you could divide the values by 60 to get minutes.
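For example, a small sketch of that minutes conversion, reusing the foreach pattern from earlier in the thread:

| timechart max(lead_time) as "Maximum" avg(lead_time) as "Average" min(lead_time) as "Minimum"
| foreach Maximum Average Minimum
    [ eval <<FIELD>>=round('<<FIELD>>'/60, 1) ]

The fields stay numeric, so they can still be plotted on a line chart.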
We want to monitor and poll data from REST APIs and index the responses in Splunk. We know this could be achieved with the Splunkbase app REST API Modular Input, but since it's a developer-supported, paid application, we wanted to know whether there is any alternative way to do the same in Splunk Enterprise. A quick response is highly appreciated.