All Posts

Event format after upload
Here is the raw file. I tried this by uploading a test file, but I am not getting the events in the proper format when ingesting: all of the data ends up in one event instead of separate events, even though there are separate objects inside it. Do I need to modify the file structure, or do I need to configure props at the host level?

{ "date_extract_linux": "2024-07-26 08:44:23.398743330",
"database": { "script_version": "1.0",
"global_parameters": { "check_name": "General_parameters", "check_status": "OK", "check_error": "", "script_version": "1.0", "host_name": "flosclnrhv03.pharma.aventis.com", "database_name": "C2N48617", "instance_name": "C2N48617", "database_version": "19.0.0.0.0", "database_major_version": "19", "database_minor_version": "0" },
"queue_mem_check": [
{ "check_name": "queue_mem_check", "check_status": "OK", "check_error": "", "queue_owner": "LIVE2459_VAL", "queue_name": "AQ$_Q_TASKREPORTWORKTASK_TAB_E", "queue_sharable_mem": "4072" },
{ "check_name": "queue_mem_check", "check_status": "OK", "check_error": "", "queue_owner": "LIVE2459_VAL", "queue_name": "AQ$_Q_PIWORKTASK_TAB_E", "queue_sharable_mem": "4072" },
{ "check_name": "queue_mem_check", "check_status": "OK", "check_error": "", "queue_owner": "LIVE2459_VAL", "queue_name": "AQ$_Q_LABELWORKTASK_TAB_E", "queue_sharable_mem": "4072" },
{ "check_name": "queue_mem_check", "check_status": "OK", "check_error": "", "queue_owner": "LIVE2459_VAL", "queue_name": "AQ$_Q_PIPROCESS_TAB_E", "queue_sharable_mem": "4072" },
{ "check_name": "queue_mem_check", "check_status": "OK", "check_error": "", "queue_owner": "SYS", "queue_name": "ALERT_QUE", "queue_sharable_mem": "0" },
{ "check_name": "queue_mem_check", "check_status": "OK", "check_error": "", "queue_owner": "SYS", "queue_name": "AQ$_ALERT_QT_E", "queue_sharable_mem": "0" } ],
"fra_check": { "check_name": "fra_check", "check_status": "OK", "check_error": "", "flash_in_gb": "40", "flash_used_in_gb": ".62", "flash_reclaimable_gb": "0", "percent_of_space_used": "1.56" },
"processes": { "check_name": "processes", "check_status": "OK", "check_error": "", "process_percent": "27.3", "process_current_value": "273", "process_limit": "1000" },
"sessions": { "check_name": "sessions", "check_status": "OK", "check_error": "", "sessions_percent": "16.41", "sessions_current_value": "252", "sessions_limit": "1536" },
"cdb_tbs_check": [
{ "check_name": "cdb_tbs_check", "check_status": "OK", "check_error": "", "tablespace_name": "UNDOTBS1", "total_physical_all_mb": "65536", "current_use_mb": "26", "percent_used": "0" },
{ "check_name": "cdb_tbs_check", "check_status": "OK", "check_error": "", "tablespace_name": "USERS", "total_physical_all_mb": "65536", "current_use_mb": "4", "percent_used": "0" } ],
"pdb_tbs_check": [
{ "check_name": "pdb_tbs_check", "check_status": "OK",
"check_error": "",         "pdb_name": "O1NN8944",         "tablespace_name": "USERS",         "total_physical_all_mb": "32767",         "current_use_mb": "1176",         "percent_used": "4"       },       {         "check_name": "pdb_tbs_check",         "check_status": "OK",         "check_error": "",         "pdb_name": "O1S48633",         "tablespace_name": "UNDOTBS1",         "total_physical_all_mb": "65536",         "current_use_mb": "76",         "percent_used": "0"       },       {         "check_name": "pdb_tbs_check",         "check_status": "OK",         "check_error": "",         "pdb_name": "O1S48633",         "tablespace_name": "TOOLS",         "total_physical_all_mb": "65536",         "current_use_mb": "5",         "percent_used": "0"       },       {         "check_name": "pdb_tbs_check",         "check_status": "OK",         "check_error": "",         "pdb_name": "O1NN2467",         "tablespace_name": "UNDOTBS1",         "total_physical_all_mb": "65536",         "current_use_mb": "22",         "percent_used": "0"       },       {         "check_name": "pdb_tbs_check",         "check_status": "OK",         "check_error": "",         "pdb_name": "O1NN2467",         "tablespace_name": "SYSAUX",         "total_physical_all_mb": "65536",         "current_use_mb": "627",         "percent_used": "1"       },       {         "check_name": "pdb_tbs_check",         "check_status": "OK",         "check_error": "",         "pdb_name": "O1S48633",         "tablespace_name": "SYSTEM",         "total_physical_all_mb": "65536",         "current_use_mb": "784",         "percent_used": "1"       },       {         "check_name": "pdb_tbs_check",         "check_status": "OK",         "check_error": "",         "pdb_name": "O1NN8944",         "tablespace_name": "SYSAUX",         "total_physical_all_mb": "65536",         "current_use_mb": "1546",         "percent_used": "2"       },       {         "check_name": "pdb_tbs_check",         "check_status": "OK",         "check_error": "",         "pdb_name": "O1S48633",         "tablespace_name": "SYSAUX",         "total_physical_all_mb": "65536",         "current_use_mb": "7802",         "percent_used": "12"       },       {         "check_name": "pdb_tbs_check",         "check_status": "OK",         "check_error": "",         "pdb_name": "O1S48633",         "tablespace_name": "USRINDEX",         "total_physical_all_mb": "65536",         "current_use_mb": "128",         "percent_used": "0"       }     ]   } }
No, that is the complete opposite of what I am saying!

index=abc
| eval completion_time=strptime(COMPLETED_TIMESTAMP, "%Y-%m-%dT%H:%M:%S.%3QZ")
| stats count by completion_time FULFILLMENT_START_TIMESTAMP _time
| eval lead_time = (completion_time - FULFILLMENT_START_TIMESTAMP)
| timechart max(lead_time) as "Maximum" avg(lead_time) as "Average" min(lead_time) as "Minimum"

Keep the values as numeric differences between the timestamps - if you want, you could divide the values by 60 to get minutes.
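If a human-readable display is still wanted on top of that, one option (a sketch, not part of the original answer) is to keep the charted values numeric and only change how they are rendered, using fieldformat with tostring(..., "duration"), which formats a number of seconds as HH:MM:SS:

index=abc
| eval completion_time=strptime(COMPLETED_TIMESTAMP, "%Y-%m-%dT%H:%M:%S.%3QZ")
| stats count by completion_time FULFILLMENT_START_TIMESTAMP _time
| eval lead_time = (completion_time - FULFILLMENT_START_TIMESTAMP)
| timechart max(lead_time) as "Maximum" avg(lead_time) as "Average" min(lead_time) as "Minimum"
| fieldformat Maximum = tostring(Maximum, "duration")
| fieldformat Average = tostring(Average, "duration")
| fieldformat Minimum = tostring(Minimum, "duration")

Because fieldformat only changes how the values are displayed, the underlying numbers stay numeric and remain usable for the line chart.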
We want to monitor and poll data from REST APIs and index the responses in Splunk. We know that this can be achieved with the Splunkbase app REST API Modular Input, but since that is a developer-supported paid application, we wanted to know whether there is any alternative way to do the same in Splunk Enterprise. A quick response is highly appreciated.
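For reference, one commonly used alternative is a scripted input: a small script that calls the REST API and writes the JSON response to stdout, which Splunk runs on a schedule. A minimal inputs.conf sketch, where the app name, script name, index, and sourcetype are hypothetical:

[script://$SPLUNK_HOME/etc/apps/my_rest_poller/bin/poll_api.sh]
interval = 300
index = rest_api
sourcetype = rest:json
disabled = 0

Whatever the script prints on each run gets indexed, so authentication, pagination, and checkpointing have to be handled inside the script itself; writing a custom modular input, or posting the responses to an HTTP Event Collector endpoint from an external scheduler, are other options.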
@ITWhisperer Do you mean like this?

| eval max_HH_MM = tostring(max_HH_MM)
| eval avg_HH_MM = tostring(avg_HH_MM)
| eval min_HH_MM = tostring(min_HH_MM)
The values need to be numeric, e.g. a number of minutes; you can't use string values such as HH:MM to display on a chart.
I know that these errors are unrelated. I was trying to show that the internal logs are not full of "error" messages. The situation is:

Thruput is not limited (thruput set to 10240).
The number of logs is low.
Logs are written to the files fluently - I checked with "tail -f".
For approximately 20 minutes after a UF restart there is no problem; the problem appears only after this time.

The problem is that data is buffered somewhere in front of the indexer for approximately 9 minutes. After I restarted the UF or dropped the TCP session, the data was suddenly sent to the indexer. I believe it must be buffered on the UF side: I saw a period with no data and then a data burst on the indexer side. The shape of the graph says the same thing - the data sits somewhere for a period of time and is then flushed to the indexer, so older data shows a bigger index-time difference and newer data a smaller one.

(Attached screenshots: Index time, SendQ TCPout, Queues, internal messages (clustered))
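One way to check where the data is sitting (a sketch, assuming the UF forwards its _internal logs; the host name is a placeholder) is to chart the queue sizes reported in the forwarder's metrics.log:

index=_internal source=*metrics.log* host=my_uf_host group=queue
| timechart max(current_size) by name

If the tcpout queue grows while the other queues stay flat, the data is queued on the UF waiting for the output side; if the parsing or indexing queues grow on the indexer instead, the bottleneck is downstream.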
I am working on the query below, in which I want to calculate lead_time in HH:MM format. The query gives me results in statistics mode but does not give any results as a line chart. Please help me fix it. Results appear in statistics mode; no results are shown when using "line chart". Below is the complete query:

index=abc
| eval completion_time=strptime(COMPLETED_TIMESTAMP, "%Y-%m-%dT%H:%M:%S.%3QZ")
| stats count by completion_time FULFILLMENT_START_TIMESTAMP _time
| eval lead_time = (completion_time - FULFILLMENT_START_TIMESTAMP)
| eval hours=floor(lead_time / 3600)
| eval minutes=floor((lead_time % 3600) / 60)
| eval formatted_minutes=if(minutes < 10, "0" . minutes, minutes)
| eval HH_MM = hours . ":" . formatted_minutes
| timechart max(HH_MM) as "Maximum" avg(HH_MM) as "Average" min(HH_MM) as "Minimum"
| eval hours=floor(Maximum / 3600)
| eval minutes=floor((Maximum % 3600) / 60)
| eval formatted_minutes=if(minutes < 10, "0" . minutes, minutes)
| eval max_HH_MM = hours . ":" . formatted_minutes
| eval hours=floor(Average / 3600)
| eval minutes=floor((Average % 3600) / 60)
| eval formatted_minutes=if(minutes < 10, "0" . minutes, minutes)
| eval avg_HH_MM = hours . ":" . formatted_minutes
| eval hours=floor(Minimum / 3600)
| eval minutes=floor((Minimum % 3600) / 60)
| eval formatted_minutes=if(minutes < 10, "0" . minutes, minutes)
| eval min_HH_MM = hours . ":" . formatted_minutes
| table _time max_HH_MM avg_HH_MM min_HH_MM
Helped a lot, thanks.
Thanks, I used a correlation key to correlate the events and it worked.
Hi, has anyone used the "ServiceNow Security Operations Event Ingestion Addon for Splunk ES" or the "ServiceNow Security Operations Addon" app to configure OAuth2? If yes, how do you set the user in the "created by" field in ServiceNow? It seems to be automatically set to the user who configured the OAuth2 connection. With basic auth it is simple because you decide which user connects to ServiceNow, but with OAuth2 there is just a client ID and secret and no user field, yet a user seems to be sent alongside the event by Splunk.
Apart from the fact that this is not quite valid JSON, what have you tried? What are you getting? What are you expecting?
Thanks, I figured it out using the stanzas. I don't know if this is the "sanctioned" way, but if anyone else is interested, what solved it for me was adding host to each stanza. Without it, it wouldn't work. So I changed this format:

[tcp://1.2.3.4:123]
connection_host = ip
index = index1
sourcetype = access_combined

To this:

[tcp://1.2.3.4:123]
connection_host = ip
host = 1.2.3.4
index = index2
sourcetype = access_combined

[tcp://5.6.7.8:123]
connection_host = ip
host = 5.6.7.8
index = index2
sourcetype = access_combined
@yuanliu Please find below an example of the logs generated in French, which cause issues during field extraction. This is why I converted them to XML, to see if it could resolve the language problem. Do you have any other solutions to this issue, please?

04/29/2014 02:50:23 PM
LogName=Security
SourceName=Microsoft Windows security auditing.
EventCode=4672
EventType=0
Type=Information
ComputerName=sacreblue
TaskCategory=Ouverture de session spéciale
OpCode=Informations
RecordNumber=2746
Keywords=Succès de l'audit
Message=Privilèges spéciaux attribués à la nouvelle ouverture de session.

Sujet :
    ID de sécurité :            AUTORITE NT\Système
    Nom du compte :             Système
    Domaine du compte :         AUTORITE NT
    ID d'ouverture de session : 0x3e7

Privilèges :    SeAssignPrimaryTokenPrivilege
                SeTcbPrivilege
                SeSecurityPrivilege
                SeTakeOwnershipPrivilege
                SeLoadDriverPrivilege
                SeBackupPrivilege
                SeRestorePrivilege
                SeDebugPrivilege
                SeAuditPrivilege
                SeSystemEnvironmentPrivilege
                SeImpersonatePrivilege
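For reference, if the XML route is what you are after, the Windows event log input can emit the events as XML directly rather than converting them afterwards; a minimal inputs.conf sketch (the index name is a placeholder):

[WinEventLog://Security]
renderXml = true
index = wineventlog
disabled = 0

The XML rendering carries the field values in their identifier form rather than as localized message text, which tends to be less sensitive to the display language of the host.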
Hi @Silah,
ok, you can use syslog with different stanzas as you did; if the second one doesn't work, check whether the firewall routes are open - you can check this using telnet from the source systems.
In addition, I suggest using an rsyslog or syslog-ng server to receive the syslog events instead of Splunk TCP inputs, writing them to files and then reading those files with the HF; in this way you can continue to receive logs even if Splunk is down or in maintenance, and you will put less load on the Splunk server.
Ciao.
Giuseppe
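As an illustration of that setup (a sketch, with placeholder path, index, and sourcetype): assuming rsyslog or syslog-ng on the HF host writes each sender's events under /var/log/remote/<hostname>/, the files can then be read with a single monitor input instead of one TCP stanza per source:

[monitor:///var/log/remote]
host_segment = 4
index = index2
sourcetype = syslog
disabled = 0

host_segment = 4 takes the fourth path segment (the hostname directory) as the host field, so a separate stanza per source IP is no longer needed.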
Thanks Giuseppe.
The why: I do need different access grants for one, and I have limitations I am trying to overcome. My heavy forwarders are behind a firewall and I have a directive to reduce the number of open ports as far as possible, and ideally I want as small a software footprint as possible (so no Splunk agents installed on the app servers), so I am trying to use the existing syslog forwarders. The TCP forwarding is working fine for the POC, but I need to scale it.
Forgive my ignorance regarding stanzas, but is that not what I tried to do by adding the second [tcp://5.6.7.8:123]? That didn't work.
I have JSON data, but all of it lands in a single event instead of each record being parsed into its own event. I am adding the event data below. Please help: what should I do to get this into a standard format in Splunk? This is in Splunk Cloud.

{"date_extract_linux":"2024-07-26 08:44:23.398743330",
"database": {"script_version":"1.0",
"global_parameters": {"check_name":"General_parameters","check_status":"OK","check_error":"","script_version":"1.0","host_name":"flosclnrhv03.pharma.aventis.com","database_name":"C2N48617","instance_name":"C2N48617","database_version":"19.0.0.0.0","database_major_version":"19","database_minor_version":"0"},
"queue_mem_check": {"check_name":"queue_mem_check","check_status":"OK","check_error":"","queue_owner":"LIVE2459_VAL","queue_name":"AQ$_Q_TASKREPORTWORKTASK_TAB_E","queue_sharable_mem":"4072"},
"queue_mem_check": {"check_name":"queue_mem_check","check_status":"OK","check_error":"","queue_owner":"SYS","queue_name":"AQ$_ALERT_QT_E","queue_sharable_mem":"0"},
"fra_check": {"check_name":"fra_check","check_status":"OK","check_error":"","flash_in_gb":"40","flash_used_in_gb":".62","flash_reclaimable_gb":"0","percent_of_space_used":"1.56"},
"processes": {"check_name":"processes","check_status":"OK","check_error":"","process_percent":"27.3","process_current_value":"273","process_limit":"1000"},
"sessions": {"check_name":"sessions","check_status":"OK","check_error":"","sessions_percent":"16.41","sessions_current_value":"252","sessions_limit":"1536"},
"cdb_tbs_check": {"check_name":"cdb_tbs_check","check_status":"OK","check_error":"","tablespace_name":"SYSTEM","total_physical_all_mb":"65536","current_use_mb":"1355","percent_used":"2"},
"cdb_tbs_check": {"check_name":"cdb_tbs_check","check_status":"OK","check_error":"","tablespace_name":"SYSAUX","total_physical_all_mb":"65536","current_use_mb":"23606","percent_used":"36"},
"cdb_tbs_check": {"check_name":"cdb_tbs_check","check_status":"OK","check_error":"","tablespace_name":"UNDOTBS1","total_physical_all_mb":"65536","current_use_mb":"26","percent_used":"0"},
"cdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1NN2467","tablespace_name":"SYSAUX","total_physical_all_mb":"65536","current_use_mb":"627","percent_used":"1"},
"pdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1S48633","tablespace_name":"SYSTEM","total_physical_all_mb":"65536","current_use_mb":"784","percent_used":"1"},
"pdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1NN8944","tablespace_name":"SYSAUX","total_physical_all_mb":"65536","current_use_mb":"1546","percent_used":"2"},
"pdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1S48633","tablespace_name":"USERS","total_physical_all_mb":"65536","current_use_mb":"1149","percent_used":"2"},
"pdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1NN8944","tablespace_name":"SYSTEM","total_physical_all_mb":"65536","current_use_mb":"705","percent_used":"1"},
"pdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1NN8944","tablespace_name":"INDX","total_physical_all_mb":"32767","current_use_mb":"378","percent_used":"1"},
"pdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1S48633","tablespace_name":"USRINDEX","total_physical_all_mb":"65536","current_use_mb":"128","percent_used":"0"},
} }
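For reference, once the generating script emits valid JSON (a reply above notes this sample is not quite valid - it has duplicate keys and a trailing comma), a minimal props.conf sketch for ingesting each document as one well-formed event; the sourcetype name is hypothetical:

[oracle_health_json]
INDEXED_EXTRACTIONS = json
SHOULD_LINEMERGE = false
KV_MODE = none
TRUNCATE = 0
TIMESTAMP_FIELDS = date_extract_linux
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%9N

Because INDEXED_EXTRACTIONS runs where the file is read, this stanza would normally live on the forwarder that monitors the file; the individual check objects can then be split out at search time with spath and mvexpand.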
Hi @rangarbus,
you should try to run these three searches in nested mode, starting from the third:

<third_search>
    [ search <second_search>
        [ search <first_search>
        | fields eventId ]
    | fields traceId ]
| table fileName

If eventId must be searched as raw text because it isn't in a field called eventId, you could use this one:

<third_search>
    [ search <second_search>
        [ search <first_search>
        | rename eventId AS query
        | fields query ]
    | fields traceId ]
| table fileName

I hope that this nested search will run over not too many events, because it will not be very performant; if you have many events, you should accelerate each search in a summary index or in a Data Model.
Ciao.
Giuseppe
Thank you for the clarification
I need help with assigning permissions in Splunk. 1. There is a user who needs to edit their dashboards and alerts in Splunk. This user has two applications whose dashboards and alerts they need access to. I want to ensure that the user has the minimum permissions necessary to edit only those two applications' dashboards and alerts. 2. A user in our system has created an alert and wants to integrate it with ServiceNow. However, when attempting to select an account name in the integration settings, the user is unable to do so. What are the minimum permissions required for the user in each case?