All Posts

Hi @ebd12, you can use the same MSI to install Splunk multiple times on different servers, but not on the same server; you cannot install it twice on the same server. If you have a Linux server and completely remove the old installation, you can install again on the same server. If you get an error in the second installation, it could be related to a different issue. Ciao. Giuseppe
@ITWhisperer Thanks for your help and suggestion.
Wouldn't you think if I knew of another way I would have mentioned it? You can't use strings for values in charts.
Hello, am I eligible for another 60-day free trial of Splunk Enterprise with the same Splunk email account? I tried to install Splunk Enterprise on another PC (with the same .msi executable used the first time, then I tried a new one), but the installation failed (error: Splunk Enterprise Setup Wizard ended prematurely). My free trial runs until 11 August 2024, and I didn't uninstall Splunk Enterprise on the first PC. I'm confused, can anyone help me?
@ITWhisperer Is there any alternative way to show this in a line chart?
My org has millions of events coming in through firewalls. I had a 24-hour timeframe search take 12.5 hours to run. I was curious about breaking it up into four 6-hour timeframes (changing the earliest/latest statements accordingly) and having them outputlookup to the same lookup file. I would then inputlookup the file and tailor the enrichment accordingly. However, I want to reset after each day, i.e. I do not want the file to keep growing. Would I set append=false on query1, and append=true for query2, query3, and query4?
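A minimal sketch of that pattern (the base search and the lookup name firewall_daily_summary.csv are hypothetical, shown only to illustrate the append flags). The first window replaces the file, because append defaults to false for outputlookup, so the lookup is reset once per day; the later windows append to it:

Query 1 (earliest window):
index=firewall earliest=-24h@h latest=-18h@h
| stats count by src_ip dest_ip
| outputlookup firewall_daily_summary.csv

Queries 2 to 4 (shift earliest/latest by 6 hours each time):
index=firewall earliest=-18h@h latest=-12h@h
| stats count by src_ip dest_ip
| outputlookup append=true firewall_daily_summary.csv

The enrichment search would then start with | inputlookup firewall_daily_summary.csv and add whatever enrichment is needed.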
| foreach Maximum Average Minimum [ eval <<FIELD>>_duration=tostring(<<FIELD>>,"duration") ]
However, as I said before, you can't use these duration fields on a chart as they are strings, not numbers.
This looks like a perfectly reasonable event to ingest whole - you should specify that it is JSON format for extraction purposes, and you can use the json_* functions to manipulate the data in your searches. Is it just that you need help to find the timestamp for the event or is Splunk already doing that correctly?
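As a rough illustration of the search-time side (the index, sourcetype and field paths here are assumptions based on the sample in this thread), spath or the json_* functions can pull individual values out of the whole event:

index=oracle_healthcheck sourcetype=oracle:healthcheck:json
| spath path=database.sessions.sessions_percent output=sessions_percent
| eval process_percent=json_extract(_raw, "database.processes.process_percent")
| table _time host sessions_percent process_percent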
@ITWhisperer
index=wma_bext TYPE=FULFILLMENT_REQUEST STATUS="Marshalling"
| eval completion_time=strptime(COMPLETED_TIMESTAMP, "%Y-%m-%dT%H:%M:%S.%3QZ")
| stats count by completion_time FULFILLMENT_START_TIMESTAMP _time
| eval lead_time = (completion_time - FULFILLMENT_START_TIMESTAMP)
| timechart max(lead_time) as "Maximum" avg(lead_time) as "Average" min(lead_time) as "Minimum"
| foreach Maximum Average Minimum [ eval <<FIELD>>_hours=round('<<FIELD>>'/3600, 2), <<FIELD>>_minutes=round('<<FIELD>>'/60, 2) ]
When I use the above query and then combine hours and minutes, I write something like "| eval max_d = Maximum_hours . ":" . Maximum_minutes", and the result becomes a string again. Please suggest how I can show my results in HH:MM format for maximum, average and minimum. Below are the results I currently get with the above query.
(Screenshot: event after upload)
Here is the raw file. I have tried this by uploading a test file, but the events are not in the proper format when ingesting: all the data ends up in one event rather than as separate events, even though there are separate objects inside. Do I need to modify the file structure, or do I need to configure props at host level?
{ "date_extract_linux": "2024-07-26 08:44:23.398743330", "database": { "script_version": "1.0", "global_parameters": { "check_name": "General_parameters", "check_status": "OK", "check_error": "", "script_version": "1.0", "host_name": "flosclnrhv03.pharma.aventis.com", "database_name": "C2N48617", "instance_name": "C2N48617", "database_version": "19.0.0.0.0", "database_major_version": "19", "database_minor_version": "0" }, "queue_mem_check": [ { "check_name": "queue_mem_check", "check_status": "OK", "check_error": "", "queue_owner": "LIVE2459_VAL", "queue_name": "AQ$_Q_TASKREPORTWORKTASK_TAB_E", "queue_sharable_mem": "4072" }, { "check_name": "queue_mem_check", "check_status": "OK", "check_error": "", "queue_owner": "LIVE2459_VAL", "queue_name": "AQ$_Q_PIWORKTASK_TAB_E", "queue_sharable_mem": "4072" }, { "check_name": "queue_mem_check", "check_status": "OK", "check_error": "", "queue_owner": "LIVE2459_VAL", "queue_name": "AQ$_Q_LABELWORKTASK_TAB_E", "queue_sharable_mem": "4072" }, { "check_name": "queue_mem_check", "check_status": "OK", "check_error": "", "queue_owner": "LIVE2459_VAL", "queue_name": "AQ$_Q_PIPROCESS_TAB_E", "queue_sharable_mem": "4072" }, { "check_name": "queue_mem_check", "check_status": "OK", "check_error": "", "queue_owner": "SYS", "queue_name": "ALERT_QUE", "queue_sharable_mem": "0" }, { "check_name": "queue_mem_check", "check_status": "OK", "check_error": "", "queue_owner": "SYS", "queue_name": "AQ$_ALERT_QT_E", "queue_sharable_mem": "0" } ], "fra_check": { "check_name": "fra_check", "check_status": "OK", "check_error": "", "flash_in_gb": "40", "flash_used_in_gb": ".62", "flash_reclaimable_gb": "0", "percent_of_space_used": "1.56" }, "processes": { "check_name": "processes", "check_status": "OK", "check_error": "", "process_percent": "27.3", "process_current_value": "273", "process_limit": "1000" }, "sessions": { "check_name": "sessions", "check_status": "OK", "check_error": "", "sessions_percent": "16.41", "sessions_current_value": "252", "sessions_limit": "1536" }, "cdb_tbs_check": [ { "check_name": "cdb_tbs_check", "check_status": "OK", "check_error": "", "tablespace_name": "UNDOTBS1", "total_physical_all_mb": "65536", "current_use_mb": "26", "percent_used": "0" }, { "check_name": "cdb_tbs_check", "check_status": "OK", "check_error": "", "tablespace_name": "USERS", "total_physical_all_mb": "65536", "current_use_mb": "4", "percent_used": "0" } ], "pdb_tbs_check": [ { "check_name": "pdb_tbs_check", "check_status": "OK",
"check_error": "",         "pdb_name": "O1NN8944",         "tablespace_name": "USERS",         "total_physical_all_mb": "32767",         "current_use_mb": "1176",         "percent_used": "4"       },       {         "check_name": "pdb_tbs_check",         "check_status": "OK",         "check_error": "",         "pdb_name": "O1S48633",         "tablespace_name": "UNDOTBS1",         "total_physical_all_mb": "65536",         "current_use_mb": "76",         "percent_used": "0"       },       {         "check_name": "pdb_tbs_check",         "check_status": "OK",         "check_error": "",         "pdb_name": "O1S48633",         "tablespace_name": "TOOLS",         "total_physical_all_mb": "65536",         "current_use_mb": "5",         "percent_used": "0"       },       {         "check_name": "pdb_tbs_check",         "check_status": "OK",         "check_error": "",         "pdb_name": "O1NN2467",         "tablespace_name": "UNDOTBS1",         "total_physical_all_mb": "65536",         "current_use_mb": "22",         "percent_used": "0"       },       {         "check_name": "pdb_tbs_check",         "check_status": "OK",         "check_error": "",         "pdb_name": "O1NN2467",         "tablespace_name": "SYSAUX",         "total_physical_all_mb": "65536",         "current_use_mb": "627",         "percent_used": "1"       },       {         "check_name": "pdb_tbs_check",         "check_status": "OK",         "check_error": "",         "pdb_name": "O1S48633",         "tablespace_name": "SYSTEM",         "total_physical_all_mb": "65536",         "current_use_mb": "784",         "percent_used": "1"       },       {         "check_name": "pdb_tbs_check",         "check_status": "OK",         "check_error": "",         "pdb_name": "O1NN8944",         "tablespace_name": "SYSAUX",         "total_physical_all_mb": "65536",         "current_use_mb": "1546",         "percent_used": "2"       },       {         "check_name": "pdb_tbs_check",         "check_status": "OK",         "check_error": "",         "pdb_name": "O1S48633",         "tablespace_name": "SYSAUX",         "total_physical_all_mb": "65536",         "current_use_mb": "7802",         "percent_used": "12"       },       {         "check_name": "pdb_tbs_check",         "check_status": "OK",         "check_error": "",         "pdb_name": "O1S48633",         "tablespace_name": "USRINDEX",         "total_physical_all_mb": "65536",         "current_use_mb": "128",         "percent_used": "0"       }     ]   } }
No, that is the complete opposite of what I am saying!
index= abc
| eval completion_time=strptime(COMPLETED_TIMESTAMP, "%Y-%m-%dT%H:%M:%S.%3QZ")
| stats count by completion_time FULFILLMENT_START_TIMESTAMP _time
| eval lead_time = (completion_time - FULFILLMENT_START_TIMESTAMP)
| timechart max(lead_time) as "Maximum" avg(lead_time) as "Average" min(lead_time) as "Minimum"
Keep the values as numeric differences between timestamps; if you want, you could divide the values by 60 to get minutes.
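For example, keeping the thread's query but charting minutes instead of seconds (only the unit changes; the values stay numeric, so the line chart still renders):

index=wma_bext TYPE=FULFILLMENT_REQUEST STATUS="Marshalling"
| eval completion_time=strptime(COMPLETED_TIMESTAMP, "%Y-%m-%dT%H:%M:%S.%3QZ")
| stats count by completion_time FULFILLMENT_START_TIMESTAMP _time
| eval lead_time_minutes = round((completion_time - FULFILLMENT_START_TIMESTAMP) / 60, 2)
| timechart max(lead_time_minutes) as "Maximum" avg(lead_time_minutes) as "Average" min(lead_time_minutes) as "Minimum"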
We want to monitor and poll data from REST APIs and index the responses in Splunk. We know this can be achieved with the Splunkbase app REST API Modular Input, but since it is a developer-supported, paid application, we want to know whether there is any alternative way to do the same in Splunk Enterprise. A quick response would be highly appreciated.
@ITWhisperer Do you mean like this?
| eval max_HH_MM = tostring(max_HH_MM)
| eval avg_HH_MM = tostring(avg_HH_MM)
| eval min_HH_MM = tostring(min_HH_MM)
The values need to be numeric, e.g. a number of minutes; you can't use string values such as HH:MM to display on a chart.
I know that these errors are unrelated; I was trying to show that the internal logs are not full of "error" messages. The situation is:
Thruput is not limited (thruput set to 10240).
The number of logs is low.
Logs in the files are generated continuously; I checked with "tail -f".
For approximately 20 minutes after a UF restart there is no problem; after that time, the problem appears.
The problem is that data is buffered somewhere in front of the indexer for approximately 9 minutes. After I restarted the UF or dropped the TCP session, the data was suddenly sent to the indexer. I believe it must be buffered on the UF side: I saw a period with no data and then a data burst on the indexer side. The shape of the graph says the same thing; the data sits somewhere for some period of time and is then flushed to the indexer, with older data showing a bigger index-time difference and newer data a smaller one.
(Screenshots: index time, SendQ, TCPout, queues, internal messages (clustered))
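One way to see where the data sits during those ~9 minutes is to chart the forwarder's own queue metrics. A sketch, assuming the UF forwards its _internal logs and substituting the real forwarder hostname for the placeholder:

index=_internal source=*metrics.log* host=<uf_hostname> group=queue
| timechart span=1m max(current_size) by name

A similar search with group=tcpout_connections can show whether the TCP output connection to the indexer stays up during that period.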
I am working on the query below, in which I want to calculate the lead_time in HH:MM. The query gives me results in statistics mode, but no results with a line chart. Please help me fix it. Results appear in statistics mode; no results show when using a "line chart". Below is the complete query:
index= abc
| eval completion_time=strptime(COMPLETED_TIMESTAMP, "%Y-%m-%dT%H:%M:%S.%3QZ")
| stats count by completion_time FULFILLMENT_START_TIMESTAMP _time
| eval lead_time = (completion_time - FULFILLMENT_START_TIMESTAMP)
| eval hours=floor(lead_time / 3600)
| eval minutes=floor((lead_time % 3600) / 60)
| eval formatted_minutes=if(minutes < 10, "0" . minutes, minutes)
| eval HH_MM = hours . ":" . formatted_minutes
| timechart max(HH_MM) as "Maximum" avg(HH_MM) as "Average" min(HH_MM) as "Minimum"
| eval hours=floor(Maximum / 3600)
| eval minutes=floor((Maximum % 3600) / 60)
| eval formatted_minutes=if(minutes < 10, "0" . minutes, minutes)
| eval max_HH_MM = hours . ":" . formatted_minutes
| eval hours=floor(Average / 3600)
| eval minutes=floor((Average % 3600) / 60)
| eval formatted_minutes=if(minutes < 10, "0" . minutes, minutes)
| eval avg_HH_MM = hours . ":" . formatted_minutes
| eval hours=floor(Minimum / 3600)
| eval minutes=floor((Minimum % 3600) / 60)
| eval formatted_minutes=if(minutes < 10, "0" . minutes, minutes)
| eval min_HH_MM = hours . ":" . formatted_minutes
| table _time max_HH_MM avg_HH_MM min_HH_MM
Helped a lot, thanks.
Thanks, I used a correlation key to correlate the events and it worked.
Hi, has anyone used the "ServiceNow Security Operations Event Ingestion Addon for Splunk ES" or the "ServiceNow Security Operations Addon" app to configure OAuth2? If yes, how do you set the user in the "created by" field in ServiceNow? It seems to be automatically set to the user who configured the OAuth2 connection. With basic auth it is simple because you decide which user connects to ServiceNow, but with OAuth2 there is just a client ID and secret, there is no user field, and yet a user seems to be sent alongside the event by Splunk.