All Posts

Apart from the fact that this is not quite valid JSON, what have you tried? What are you getting? What are you expecting?
Thanks, I figured it out using the stanzas. I don't know if this is the "sanctioned" way, but in case anyone else is interested, what solved it for me was adding host to each stanza. Without it, it wouldn't work. So I changed this format:

[tcp://1.2.3.4:123]
connection_host = ip
index = index1
sourcetype = access_combined

To this:

[tcp://1.2.3.4:123]
connection_host = ip
host = 1.2.3.4
index = index2
sourcetype = access_combined

[tcp://5.6.7.8:123]
connection_host = ip
host = 5.6.7.8
index = index2
sourcetype = access_combined
@yuanliu Please find below an example of the logs generated in French, which cause issues during field extraction. This is why I converted them to XML, to see if it could resolve the language problem. Do you have any other solutions to this issue, please?

04/29/2014 02:50:23 PM
LogName=Security
SourceName=Microsoft Windows security auditing.
EventCode=4672
EventType=0
Type=Information
ComputerName=sacreblue
TaskCategory=Ouverture de session spéciale
OpCode=Informations
RecordNumber=2746
Keywords=Succès de l'audit
Message=Privilèges spéciaux attribués à la nouvelle ouverture de session.

Sujet :
    ID de sécurité : AUTORITE NT\Système
    Nom du compte : Système
    Domaine du compte : AUTORITE NT
    ID d'ouverture de session : 0x3e7

Privilèges :
    SeAssignPrimaryTokenPrivilege
    SeTcbPrivilege
    SeSecurityPrivilege
    SeTakeOwnershipPrivilege
    SeLoadDriverPrivilege
    SeBackupPrivilege
    SeRestorePrivilege
    SeDebugPrivilege
    SeAuditPrivilege
    SeSystemEnvironmentPrivilege
    SeImpersonatePrivilege
Hi @Silah,
OK, you can use syslog with different stanzas as you did. If the second one doesn't receive anything, check whether the firewall routes are open; you can verify this using telnet from the source systems.
In addition, I suggest using an rsyslog or syslog-ng server to receive the syslog events instead of Splunk TCP inputs, writing them to files and then reading those files with the HF. This way you can continue to receive logs even when Splunk is down or in maintenance, and you put less load on the Splunk server. A minimal sketch of this setup follows below.
Ciao.
Giuseppe
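A minimal sketch of the setup described above, assuming rsyslog 8.x listening on TCP 514; the port, file paths, index, and sourcetype here are illustrative, not values from this thread:

# /etc/rsyslog.d/10-splunk.conf -- receive syslog over TCP and write one file per source host
module(load="imtcp")
input(type="imtcp" port="514")
template(name="PerHostFile" type="string" string="/var/log/remote/%HOSTNAME%/syslog.log")
action(type="omfile" dynaFile="PerHostFile")

# inputs.conf on the heavy forwarder -- monitor the files rsyslog writes;
# host_segment=4 takes the host from the 4th path segment (/var/log/remote/<host>/...)
[monitor:///var/log/remote/*/syslog.log]
index = index2
sourcetype = syslog
host_segment = 4

The HF then reads the files like any other monitored input, so the syslog receiver and Splunk can restart independently of each other.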
Thanks Giuseppe.
The why: I do need different access grants, for one, and I have limitations I am trying to overcome. My heavy forwarders are behind a firewall, I have a directive to reduce the number of open ports as far as possible, and ideally I want as small a software footprint as possible (so no Splunk agents installed on the app servers), so I am trying to use the existing syslog forwarder. The TCP forwarding is working fine for the POC, but I need to scale it.
Forgive my ignorance regarding stanzas, but isn't that what I tried to do by adding the second [tcp://5.6.7.8:123]? That didn't work.
I have JSON data, but it is all arriving as a single event instead of being parsed into separate events. The event data is below. Please help me get this into a standard format in Splunk; this is Splunk Cloud.

{"date_extract_linux":"2024-07-26 08:44:23.398743330","database": {
"script_version":"1.0",
"global_parameters": {"check_name":"General_parameters","check_status":"OK","check_error":"","script_version":"1.0","host_name":"flosclnrhv03.pharma.aventis.com","database_name":"C2N48617","instance_name":"C2N48617","database_version":"19.0.0.0.0","database_major_version":"19","database_minor_version":"0"},
"queue_mem_check": {"check_name":"queue_mem_check","check_status":"OK","check_error":"","queue_owner":"LIVE2459_VAL","queue_name":"AQ$_Q_TASKREPORTWORKTASK_TAB_E","queue_sharable_mem":"4072"},
"queue_mem_check": {"check_name":"queue_mem_check","check_status":"OK","check_error":"","queue_owner":"SYS","queue_name":"AQ$_ALERT_QT_E","queue_sharable_mem":"0"},
"fra_check": {"check_name":"fra_check","check_status":"OK","check_error":"","flash_in_gb":"40","flash_used_in_gb":".62","flash_reclaimable_gb":"0","percent_of_space_used":"1.56"},
"processes": {"check_name":"processes","check_status":"OK","check_error":"","process_percent":"27.3","process_current_value":"273","process_limit":"1000"},
"sessions": {"check_name":"sessions","check_status":"OK","check_error":"","sessions_percent":"16.41","sessions_current_value":"252","sessions_limit":"1536"},
"cdb_tbs_check": {"check_name":"cdb_tbs_check","check_status":"OK","check_error":"","tablespace_name":"SYSTEM","total_physical_all_mb":"65536","current_use_mb":"1355","percent_used":"2"},
"cdb_tbs_check": {"check_name":"cdb_tbs_check","check_status":"OK","check_error":"","tablespace_name":"SYSAUX","total_physical_all_mb":"65536","current_use_mb":"23606","percent_used":"36"},
"cdb_tbs_check": {"check_name":"cdb_tbs_check","check_status":"OK","check_error":"","tablespace_name":"UNDOTBS1","total_physical_all_mb":"65536","current_use_mb":"26","percent_used":"0"},
"cdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1NN2467","tablespace_name":"SYSAUX","total_physical_all_mb":"65536","current_use_mb":"627","percent_used":"1"},
"pdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1S48633","tablespace_name":"SYSTEM","total_physical_all_mb":"65536","current_use_mb":"784","percent_used":"1"},
"pdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1NN8944","tablespace_name":"SYSAUX","total_physical_all_mb":"65536","current_use_mb":"1546","percent_used":"2"},
"pdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1S48633","tablespace_name":"USERS","total_physical_all_mb":"65536","current_use_mb":"1149","percent_used":"2"},
"pdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1NN8944","tablespace_name":"SYSTEM","total_physical_all_mb":"65536","current_use_mb":"705","percent_used":"1"},
"pdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1NN8944","tablespace_name":"INDX","total_physical_all_mb":"32767","current_use_mb":"378","percent_used":"1"},
"pdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1S48633","tablespace_name":"USRINDEX","total_physical_all_mb":"65536","current_use_mb":"128","percent_used":"0"},
} }
Hi @rangarbus,
You should try to run these three searches in nested mode, starting from the third:

<third_search>
    [ search <second_search>
        [ search <first_search>
        | fields eventId ]
    | fields traceId ]
| table fileName

If eventId must be searched as raw text because it isn't in a field called eventId, you could use this one:

<third_search>
    [ search <second_search>
        [ search <first_search>
        | rename eventId AS query
        | fields query ]
    | fields traceId ]
| table fileName

I hope this nested search will run over not too many events, because it will not be very performant; if you have many events, you should accelerate each search in a summary index or in a Data Model.
Ciao.
Giuseppe
Thank you for the clarification
I need help with assigning permissions in Splunk.
1. There is a user who needs to edit their dashboards and alerts in Splunk. The dashboards and alerts live in two applications that the user needs access to. I want to ensure that the user has the minimum permissions necessary to edit only those dashboards and alerts.
2. A user in our system has created an alert and wants to integrate it with ServiceNow. However, when attempting to select an account name in the integration settings, the user is unable to do so. What are the minimum permissions required for this user?
Hi @splunkreal,
I'm sorry, but it isn't possible. It's possible to override the index value before indexing only on non-cooked events (not passed through an HF or IDX), using the method described at https://docs.splunk.com/Documentation/Splunk/9.2.2/Forwarding/Routeandfilterdatad#Route_inputs_to_specific_indexers_based_on_the_data_input
Ciao.
Giuseppe
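For reference, the documented method linked above boils down to a props/transforms pair on the first full Splunk instance that parses the raw data; this is only a sketch, and the stanza name, regex, and index value are illustrative:

# props.conf -- attach the transform to a sourcetype (stanza name is a placeholder)
[<your_sourcetype>]
TRANSFORMS-route_index = route_to_index2

# transforms.conf -- rewrite the index key before the event is indexed
[route_to_index2]
# "." matches every event; a real deployment would match something specific
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = index2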
@nabeel652 You don't really need the tokens; just add the selectFirstChoice option and make sure last week is sorted first, and it will all work. See this dashboard example:

<form version="1.1" theme="light">
  <label>LastWeek</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="week">
      <label>week</label>
      <fieldForLabel>time</fieldForLabel>
      <fieldForValue>start_time</fieldForValue>
      <selectFirstChoice>1</selectFirstChoice>
      <search>
        <query>| makeresults count=52
| fields - _time
| streamstats count
| eval count=count-1
| eval start_time = relative_time(now(),"-".count."w@w+1d")
| eval time = case(count==1, "Last week", count==0, "Current week", 1==1, strftime(start_time,"%a %d-%b-%Y"))
| eval order=if(count==1, -1, count)
| sort order
| table time, start_time
| eval start_time=round(start_time,0)</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <change>
        <set token="week_name">$label$</set>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults
| fields - _time
| eval selection=$week|s$, name=$week_name|s$
| eval Value=strftime(selection, "%F %T")</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>
Hi @sintjm,
As @yuanliu also said, you need a correlation key to correlate the events. If you have one, you can use it in a stats command, and this is the best solution:

<your_search>
| stats values(Resp_time) AS Resp_time values(Req_time) AS Req_time BY key
| eval diff=Resp_time-Req_time

If you don't have one and you're sure that events are always sequential, you could use the transaction command:

<your_search>
| transaction maxevents=2
| table duration

Ciao.
Giuseppe
Hi @nabeel652,
You should use valid values from the dropdown contents as the default and initial settings. Changing the last_week token init to the formatted value will help you; please try the below:

<fieldset submitButton="false">
  <input type="dropdown" token="week">
    <label>week</label>
    <fieldForLabel>time</fieldForLabel>
    <fieldForValue>start_time</fieldForValue>
    <search>
      <query>| makeresults count=52
| fields - _time
| streamstats count
| eval count=count-1
| eval start_time = relative_time(now(),"-".count."w@w+1d")
| eval time = case(count==1, "Last week", count==0, "Current week", 1==1, strftime(start_time,"%a %d-%b-%Y"))
| table time, start_time
| eval start_time=round(start_time,0)</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </search>
    <default>$last_week$</default>
    <initialValue>$last_week$</initialValue>
  </input>
</fieldset>

The token initialisation that calculates last week wrt now():

<init>
  <eval token="last_week">strftime(relative_time(now(),"-1w@w+1d"),"%a %d-%b-%Y")</eval>
</init>
These errors are completely unrelated. You'd need to dig deeper to find something relevant regarding inputs on the receiving side or outputs on the sending side. And the shape of your graph does look awfully close to a periodic batch input that then unloads over a limited-throughput connection.
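A couple of starting points for that digging, as a sketch; the component names and metrics.log fields assume a default _internal setup. Warnings and errors from the TCP input/output pipelines:

index=_internal sourcetype=splunkd log_level IN (WARN, ERROR) component IN (TcpOutputProc, TcpInputProc)

And to see whether the throughput shape matches a periodic batch that drains slowly:

index=_internal source=*metrics.log group=thruput name=thruput
| timechart span=1m avg(instantaneous_kbps)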
Hello Splunkers,
I have a dropdown that calculates week_start for the last whole year. It then has to pick "last_week" as the default. I noticed that the dropdown, instead of remembering the label, adds the value to <default></default>. I've tried to calculate last_week as a token and add it to <default></default>, which it picks up correctly, but it shows the epoch time in the dropdown instead of selecting the corresponding label "Last Week".
Code defining the dropdown search and initialising the token $last_week$:

<fieldset submitButton="false">
  <input type="dropdown" token="week">
    <label>week</label>
    <fieldForLabel>time</fieldForLabel>
    <fieldForValue>start_time</fieldForValue>
    <search>
      <query>| makeresults count=52
| fields - _time
| streamstats count
| eval count=count-1
| eval start_time = relative_time(now(),"-".count."w@w+1d")
| eval time = case(count==1, "Last week", count==0, "Current week", 1==1, strftime(start_time,"%a %d-%b-%Y"))
| table time, start_time
| eval start_time=round(start_time,0)</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </search>
    <default>$last_week$</default>
    <initialValue>$last_week$</initialValue>
  </input>
</fieldset>

The token initialisation that calculates last week wrt now():

<init>
  <eval token="last_week">relative_time(now(),"-1w@w+1d")</eval>
</init>
Please post the solution.
Looks like a permission issue. May I know if the lookup file is shared with the right apps/users? Please check it, thanks.
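One quick way to check the sharing and ownership, as a sketch; this uses the standard REST endpoint for lookup table files, and you need permission to query it:

| rest /servicesNS/-/-/data/lookup-table-files
| table title eai:acl.app eai:acl.owner eai:acl.sharing eai:acl.perms.read eai:acl.perms.write

If the file shows sharing=app or sharing=user instead of global, or your role isn't in the read permissions, that would explain the lookup not being found.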
This is because you have a multi-segment data path and eval doesn't like it. Use single quotes to tell eval that log.level is a field name, not some random string.

index="prod_k8s_onprm_dig-k8-prod1" "k8s.namespace.name"="apl-secure-dig-svc-prod1" "k8s.container.name"="abc-def-cust-prof" NOT k8s.container.name=istio-proxy NOT log.level IN(DEBUG,INFO) (error OR exception) (earliest="07/25/2024:11:30:00" latest="07/25/2024:12:30:00")
| addinfo
| bin _time span=30m@m
| stats count(eval('log.level'="ERROR")) as error_count by _time
| eventstats stdev(error_count)
Typical GDI troubleshooting steps include:
- Verify the input configuration, including the URL and credentials.
- Verify the Splunk server running the add-on can connect to the MS server. Use curl or a similar tool.
- Check splunkd.log for related messages.
- Check the MS logs for related messages.
- If you're using Splunk search to see whether data is coming in, double-check the SPL. Verify the index name. Try specifying latest=+1y to account for timestamp errors.
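A couple of the checks above as concrete commands, as a sketch only; the hostname, add-on name, and index are placeholders, not values from this thread:

# connectivity check from the Splunk server running the add-on
curl -v https://your-ms-server.example.com/

# splunkd.log messages mentioning the add-on
index=_internal sourcetype=splunkd log_level IN (WARN, ERROR) "your_addon_name"

# search for incoming data, widening latest to catch future-dated timestamps
index=your_index earliest=-7d latest=+1y
| stats count by sourcetype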
Thanks for reading, and have a good day.

1. We made a lookup file as a CSV (in the test environment) and then copied it to the real server, but SPL doesn't recognize the lookup file, even though I double-checked that the location is correct in the app: /opt/splunk/etc/apps (my apps you know..)/lookups
Is there anything else I need to check, or do I need to make a new CSV? Also, the file's owner is different in the new environment (it was admin in the original); does this have any relation to chmod file permissions in Linux?

2. I can't fully understand how a lookup's scope works. I mean, if we create or apply a lookup file in app "A", can app "B" also use that lookup file? I don't understand what it means to group lookup files per app. If I put a lookup file in the admin app and set all its permissions to "all", then when a lookup is run from another user's page, is it correct that it references the lookup table in the admin app?