All Posts

I know that these errors are unrelated. I was trying to show that the internal logs are not full of "error" messages. The situation:
- Thruput is not limited (thruput set to 10240).
- The number of logs is low.
- Logs are written to the files fluently; I checked with "tail -f".
- For approximately 20 minutes after a UF restart there is no problem; after that, the problem appears.
The problem is that data are buffered somewhere in front of the indexer server, for approximately 9 minutes. After I restarted the UF or dropped the TCP session, the data were suddenly sent to the indexer. I believe they must be buffered on the UF side: I saw a no-data period and then a data burst on the indexer side. The shape of the graph says the same thing: data sit somewhere for some period of time and are then flushed to the indexer. The index-time difference is larger for older data and smaller for newer data.
Tags: Index time, SendQ, TCPout, Queues, internal messages (clustered)
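For anyone digging into something similar: a search along these lines (a minimal sketch, assuming the UF forwards its _internal logs; the hostname is a placeholder) can show whether the output queue on the UF fills up and then flushes:

index=_internal source=*metrics.log group=queue name=tcpout* host=<uf_hostname>
| timechart span=1m max(current_size) as queue_size

A sawtooth pattern here (queue filling, then emptying in a burst) would match the graph shape described above.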
I am working on the query below, in which I want to calculate the lead_time in HH:MM. This query gives me some results in statistics mode but no results with a line chart. Please help me fix it.

Results appear in statistics mode. No results show while using "line chart".

Below is the complete query:

index=abc
| eval completion_time=strptime(COMPLETED_TIMESTAMP, "%Y-%m-%dT%H:%M:%S.%3QZ")
| stats count by completion_time FULFILLMENT_START_TIMESTAMP _time
| eval lead_time = (completion_time - FULFILLMENT_START_TIMESTAMP)
| eval hours=floor(lead_time / 3600)
| eval minutes=floor((lead_time % 3600) / 60)
| eval formatted_minutes=if(minutes < 10, "0" . minutes, minutes)
| eval HH_MM = hours . ":" . formatted_minutes
| timechart max(HH_MM) as "Maximum" avg(HH_MM) as "Average" min(HH_MM) as "Minimum"
| eval hours=floor(Maximum / 3600)
| eval minutes=floor((Maximum % 3600) / 60)
| eval formatted_minutes=if(minutes < 10, "0" . minutes, minutes)
| eval max_HH_MM = hours . ":" . formatted_minutes
| eval hours=floor(Average / 3600)
| eval minutes=floor((Average % 3600) / 60)
| eval formatted_minutes=if(minutes < 10, "0" . minutes, minutes)
| eval avg_HH_MM = hours . ":" . formatted_minutes
| eval hours=floor(Minimum / 3600)
| eval minutes=floor((Minimum % 3600) / 60)
| eval formatted_minutes=if(minutes < 10, "0" . minutes, minutes)
| eval min_HH_MM = hours . ":" . formatted_minutes
| table _time max_HH_MM avg_HH_MM min_HH_MM
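A likely reason the line chart stays empty is that HH_MM is a string, and charts need numeric values. A minimal sketch of the usual workaround, built from the field names above (it assumes FULFILLMENT_START_TIMESTAMP is already epoch seconds, as the original query does):

index=abc
| eval completion_time=strptime(COMPLETED_TIMESTAMP, "%Y-%m-%dT%H:%M:%S.%3QZ")
| eval lead_time=completion_time - FULFILLMENT_START_TIMESTAMP
| timechart max(lead_time) as Maximum avg(lead_time) as Average min(lead_time) as Minimum

The chart then plots seconds; the HH:MM formatting can be kept for the statistics table only, since a string like "2:05" cannot be plotted on a numeric axis.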
Helped a lot, thanks.
Thanks, I used a correlation key to correlate the events and it worked.
Hi, has anyone used the "ServiceNow Security Operations Event Ingestion Addon for Splunk ES" or the "ServiceNow Security Operations Addon" app to configure OAuth2? If yes, how do you set the user in the "created by" field in ServiceNow? It seems to be automatically set to the user who configured the OAuth2 connection. With basic auth it is simple because you decide which user connects to ServiceNow, but with OAuth2 there is just a client ID and secret; there is no user field, and yet a user seems to be sent alongside the event by Splunk.
Apart from the fact that this is not quite valid JSON: what have you tried? What are you getting? What are you expecting?
Thanks, I figured it out using the stanzas. I don't know if this is the "sanctioned" way, but if anyone else is interested, what solved it for me was adding host to each stanza. Without it, it wouldn't work. So I changed this format:

[tcp://1.2.3.4:123]
connection_host = ip
index = index1
sourcetype = access_combined

To this:

[tcp://1.2.3.4:123]
connection_host = ip
host = 1.2.3.4
index = index2
sourcetype = access_combined

[tcp://5.6.7.8:123]
connection_host = ip
host = 5.6.7.8
index = index2
sourcetype = access_combined
@yuanliu Please find below an example of logs generated in French, which causes issues during field extraction. This is why I converted them to XML, to see if it could resolve the language problem. Do you have any other solutions to this issue, please?

04/29/2014 02:50:23 PM
LogName=Security
SourceName=Microsoft Windows security auditing.
EventCode=4672
EventType=0
Type=Information
ComputerName=sacreblue
TaskCategory=Ouverture de session spéciale
OpCode=Informations
RecordNumber=2746
Keywords=Succès de l'audit
Message=Privilèges spéciaux attribués à la nouvelle ouverture de session.

Sujet :
    ID de sécurité : AUTORITE NT\Système
    Nom du compte : Système
    Domaine du compte : AUTORITE NT
    ID d'ouverture de session : 0x3e7

Privilèges :
    SeAssignPrimaryTokenPrivilege
    SeTcbPrivilege
    SeSecurityPrivilege
    SeTakeOwnershipPrivilege
    SeLoadDriverPrivilege
    SeBackupPrivilege
    SeRestorePrivilege
    SeDebugPrivilege
    SeAuditPrivilege
    SeSystemEnvironmentPrivilege
    SeImpersonatePrivilege
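One possible stopgap while a better option is found is a search-time extraction keyed to the French field labels. A minimal sketch (the index name and target field are assumptions, not from the post):

index=wineventlog EventCode=4672
| rex field=Message "Nom du compte\s*:\s*(?<account_name>[^\r\n]+)"
| table _time, ComputerName, account_name

The drawback is that every localized label needs its own pattern, which is exactly why the standard English-based extractions miss these events.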
Hi @Silah,
OK, you can use syslog with different stanzas as you did; if the second one doesn't work, check that the firewall routes are open. You can verify this using telnet from the source systems.
In addition, I suggest using an rsyslog or syslog-ng server to receive the syslog events instead of Splunk TCP inputs, writing them to files and then reading those files with the HF; this way you can continue to receive logs even when Splunk is down or in maintenance, and the Splunk server carries less load.
Ciao.
Giuseppe
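To illustrate that pattern, a minimal sketch with assumed ports and paths (not from this thread): rsyslog listens on TCP 514 and writes one file per sending host, and the HF monitors the resulting directory.

# /etc/rsyslog.d/splunk.conf (hypothetical path)
module(load="imtcp")
input(type="imtcp" port="514")
# one file per source IP, so the host can be recovered from the path
template(name="PerHost" type="string" string="/var/log/remote/%FROMHOST-IP%/syslog.log")
action(type="omfile" dynaFile="PerHost")

# inputs.conf on the HF
[monitor:///var/log/remote/*/syslog.log]
host_segment = 4
index = index2
sourcetype = access_combined

host_segment = 4 picks the IP directory out of the path, which replaces the per-stanza host = lines needed with the TCP-input approach.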
Thanks Giuseppe
The why: I do need different access grants for one, and I have limitations I am trying to overcome. My heavy forwarders are behind a firewall and I have a directive to reduce the number of open ports as far as possible, and ideally I want as small a software footprint as possible (so no Splunk agents installed on the app servers), so I am trying to use the existing syslog forwarder. The TCP forwarding is working fine for the POC, but I need to scale it.
Forgive my ignorance regarding stanzas, but is that not what I tried to do by adding the second [tcp://5.6.7.8:123]? That didn't work.
I have JSON data, but it all arrives in a single event instead of being parsed properly into separate events. I am adding the event data below. Please help: what should I do to get this into a standard format in Splunk? This is on Splunk Cloud.

{"date_extract_linux":"2024-07-26 08:44:23.398743330",
"database": {
"script_version":"1.0",
"global_parameters": {"check_name":"General_parameters","check_status":"OK","check_error":"","script_version":"1.0","host_name":"flosclnrhv03.pharma.aventis.com","database_name":"C2N48617","instance_name":"C2N48617","database_version":"19.0.0.0.0","database_major_version":"19","database_minor_version":"0"},
"queue_mem_check": {"check_name":"queue_mem_check","check_status":"OK","check_error":"","queue_owner":"LIVE2459_VAL","queue_name":"AQ$_Q_TASKREPORTWORKTASK_TAB_E","queue_sharable_mem":"4072"},
"queue_mem_check": {"check_name":"queue_mem_check","check_status":"OK","check_error":"","queue_owner":"SYS","queue_name":"AQ$_ALERT_QT_E","queue_sharable_mem":"0"},
"fra_check": {"check_name":"fra_check","check_status":"OK","check_error":"","flash_in_gb":"40","flash_used_in_gb":".62","flash_reclaimable_gb":"0","percent_of_space_used":"1.56"},
"processes": {"check_name":"processes","check_status":"OK","check_error":"","process_percent":"27.3","process_current_value":"273","process_limit":"1000"},
"sessions": {"check_name":"sessions","check_status":"OK","check_error":"","sessions_percent":"16.41","sessions_current_value":"252","sessions_limit":"1536"},
"cdb_tbs_check": {"check_name":"cdb_tbs_check","check_status":"OK","check_error":"","tablespace_name":"SYSTEM","total_physical_all_mb":"65536","current_use_mb":"1355","percent_used":"2"},
"cdb_tbs_check": {"check_name":"cdb_tbs_check","check_status":"OK","check_error":"","tablespace_name":"SYSAUX","total_physical_all_mb":"65536","current_use_mb":"23606","percent_used":"36"},
"cdb_tbs_check": {"check_name":"cdb_tbs_check","check_status":"OK","check_error":"","tablespace_name":"UNDOTBS1","total_physical_all_mb":"65536","current_use_mb":"26","percent_used":"0"},
"cdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1NN2467","tablespace_name":"SYSAUX","total_physical_all_mb":"65536","current_use_mb":"627","percent_used":"1"},
"pdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1S48633","tablespace_name":"SYSTEM","total_physical_all_mb":"65536","current_use_mb":"784","percent_used":"1"},
"pdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1NN8944","tablespace_name":"SYSAUX","total_physical_all_mb":"65536","current_use_mb":"1546","percent_used":"2"},
"pdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1S48633","tablespace_name":"USERS","total_physical_all_mb":"65536","current_use_mb":"1149","percent_used":"2"},
"pdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1NN8944","tablespace_name":"SYSTEM","total_physical_all_mb":"65536","current_use_mb":"705","percent_used":"1"},
"pdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1NN8944","tablespace_name":"INDX","total_physical_all_mb":"32767","current_use_mb":"378","percent_used":"1"},
"pdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1S48633","tablespace_name":"USRINDEX","total_physical_all_mb":"65536","current_use_mb":"128","percent_used":"0"},
}
}
Hi @rangarbus,
you should try to run these three searches in nested mode, starting from the third:

<third_search>
    [ search <second_search>
        [ search <first_search>
        | fields eventId ]
    | fields traceId ]
| table fileName

If eventId must be searched as raw text, because it isn't in a field called eventId, you could use this one:

<third_search>
    [ search <second_search>
        [ search <first_search>
        | rename eventId AS query
        | fields query ]
    | fields traceId ]
| table fileName

I hope that this nested search will run on not too many events, because it will not be very performant; if you have many events, you should accelerate each search in a summary index or in a Data Model.
Ciao.
Giuseppe
Thank you for the clarification
I need help with assigning permissions in Splunk.
1. There is a user who needs to edit their dashboards and alerts in Splunk. The dashboards and alerts this user needs access to live in two applications. I want to ensure that the user has the minimum permissions necessary to edit only those two sets of dashboards and alerts.
2. A user in our system has created an alert and wants to integrate it with ServiceNow. However, when attempting to select an account name in the integration settings, the user is unable to select one. What minimum permissions does the user require?
Hi @splunkreal,
I'm sorry, but it isn't possible. It's possible to override the index value before indexing only on uncooked events (not passed through an HF or IDX), using the method described at https://docs.splunk.com/Documentation/Splunk/9.2.2/Forwarding/Routeandfilterdatad#Route_inputs_to_specific_indexers_based_on_the_data_input
Ciao.
Giuseppe
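For reference, that method comes down to a props/transforms pair on the first full Splunk instance that parses the data. A minimal sketch, with the sourcetype, regex, and index name as placeholder assumptions:

# props.conf
[my_sourcetype]
TRANSFORMS-route_index = route_to_other_index

# transforms.conf
[route_to_other_index]
REGEX = some_pattern
DEST_KEY = _MetaData:Index
FORMAT = other_index

Events whose raw text matches REGEX are rewritten to the other_index destination at parse time; anything already cooked by an upstream HF or indexer will not be reprocessed, which is why it can't be done later in the pipeline.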
@nabeel652 You don't really need the tokens; just add the selectFirstChoice option and make sure last week is sorted first, and it will all work. See this dashboard example:

<form version="1.1" theme="light">
  <label>LastWeek</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="week">
      <label>week</label>
      <fieldForLabel>time</fieldForLabel>
      <fieldForValue>start_time</fieldForValue>
      <selectFirstChoice>1</selectFirstChoice>
      <search>
        <query>| makeresults count=52
| fields - _time
| streamstats count
| eval count=count-1
| eval start_time = relative_time(now(),"-".count."w@w+1d")
| eval time = case(count==1, "Last week", count==0, "Current week", 1==1, strftime(start_time,"%a %d-%b-%Y"))
| eval order=if(count==1, -1, count)
| sort order
| table time, start_time
| eval start_time=round(start_time,0)</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <change>
        <set token="week_name">$label$</set>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults
| fields - _time
| eval selection=$week|s$, name=$week_name|s$
| eval Value=strftime(selection, "%F %T")</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>
Hi @sintjm,
as @yuanliu also said, you need a correlation key to correlate the events. If you have one, you can use it in a stats command, and this is the best solution:

<your_search>
| stats values(Resp_time) AS Resp_time values(Req_time) AS Req_time BY key
| eval diff=Resp_time-Req_time

If you don't have one, and you're sure that the events are always sequential, you could use the transaction command:

<your_search>
| transaction maxevents=2
| table duration

Ciao.
Giuseppe
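A self-contained toy example of the stats approach, using synthetic data from makeresults (field names as in this thread, values invented for illustration):

| makeresults count=2
| streamstats count as n
| eval key="txn1", Req_time=if(n==1, 100, null()), Resp_time=if(n==2, 250, null())
| stats values(Resp_time) AS Resp_time values(Req_time) AS Req_time BY key
| eval diff=Resp_time-Req_time

This returns one row per key with diff=150, i.e. the response-minus-request latency.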
Hi @nabeel652,
You should use valid values from the dropdown contents as the default and initial settings. Changing the last_week token init to the formatted value will help; please try the below:

<fieldset submitButton="false">
  <input type="dropdown" token="week">
    <label>week</label>
    <fieldForLabel>time</fieldForLabel>
    <fieldForValue>start_time</fieldForValue>
    <search>
      <query>| makeresults count=52
| fields - _time
| streamstats count
| eval count=count-1
| eval start_time = relative_time(now(),"-".count."w@w+1d")
| eval time = case(count==1, "Last week", count==0, "Current week", 1==1, strftime(start_time,"%a %d-%b-%Y"))
| table time, start_time
| eval start_time=round(start_time,0)</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </search>
    <default>$last_week$</default>
    <initialValue>$last_week$</initialValue>
  </input>
</fieldset>

The token initialisation that calculates last week relative to now():

<init>
  <eval token="last_week">strftime(relative_time(now(),"-1w@w+1d"),"%a %d-%b-%Y")</eval>
</init>
These errors are completely unrelated. You'd need to dig deeper to find something relevant regarding inputs on the receiving side or outputs on the sending side. And the shape of your graph does look awfully close to a situation with a periodic batch input which then unloads over a limited-thruput connection.
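One way to check the limited-thruput theory from the forwarder's own metrics, assuming the UF ships its _internal logs (the hostname is a placeholder):

index=_internal source=*metrics.log group=thruput name=thruput host=<uf_hostname>
| timechart span=1m max(instantaneous_kbps) as kbps

If kbps sits flat at a fixed ceiling while the backlog drains, the connection is thruput-limited rather than input-starved.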
Hello Splunkers,
I have a dropdown that calculates week_start for the last whole year. It then has to pick "last_week" as the default. I noticed that the dropdown, instead of remembering the label, adds the value to <default></default>. I've tried calculating last_week as a token and adding it to <default></default>, which it picks up correctly, but it shows the epoch time in the dropdown instead of selecting the corresponding label "Last week".

Code defining the dropdown search and initialising the token $last_week$:

<fieldset submitButton="false">
  <input type="dropdown" token="week">
    <label>week</label>
    <fieldForLabel>time</fieldForLabel>
    <fieldForValue>start_time</fieldForValue>
    <search>
      <query>| makeresults count=52
| fields - _time
| streamstats count
| eval count=count-1
| eval start_time = relative_time(now(),"-".count."w@w+1d")
| eval time = case(count==1, "Last week", count==0, "Current week", 1==1, strftime(start_time,"%a %d-%b-%Y"))
| table time, start_time
| eval start_time=round(start_time,0)</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </search>
    <default>$last_week$</default>
    <initialValue>$last_week$</initialValue>
  </input>
</fieldset>

The token initialisation that calculates last week relative to now():

<init>
  <eval token="last_week">relative_time(now(),"-1w@w+1d")</eval>
</init>