All Posts

Hi @splunkreal , I'm sorry, but it isn't possible. You can override the index value before indexing only on uncooked events (events that haven't already passed through an HF or IDX), using the method described at https://docs.splunk.com/Documentation/Splunk/9.2.2/Forwarding/Routeandfilterdatad#Route_inputs_to_specific_indexers_based_on_the_data_input Ciao. Giuseppe
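For reference, the linked method boils down to a props/transforms pair applied at the first parsing tier. A minimal sketch, assuming a hypothetical sourcetype your_sourcetype and a hypothetical target index team_b_index:

# props.conf
[your_sourcetype]
TRANSFORMS-route_index = route_to_team_index

# transforms.conf
[route_to_team_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = team_b_index

This only works where the events are first parsed (e.g. on a heavy forwarder, or on indexers receiving data from universal forwarders); once events are cooked, the index value can no longer be rewritten this way.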
@nabeel652 You don't really need the tokens. Just add the selectFirstChoice option and make sure "Last week" is sorted first, and it will all work. See this dashboard example:

<form version="1.1" theme="light">
  <label>LastWeek</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="week">
      <label>week</label>
      <fieldForLabel>time</fieldForLabel>
      <fieldForValue>start_time</fieldForValue>
      <selectFirstChoice>1</selectFirstChoice>
      <search>
        <query>| makeresults count=52
| fields - _time
| streamstats count
| eval count=count-1
| eval start_time = relative_time(now(),"-".count."w@w+1d")
| eval time = case(count==1, "Last week", count==0, "Current week", 1==1, strftime(start_time,"%a %d-%b-%Y"))
| eval order=if(count==1, -1, count)
| sort order
| table time, start_time
| eval start_time=round(start_time,0)</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <change>
        <set token="week_name">$label$</set>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults
| fields - _time
| eval selection=$week|s$, name=$week_name|s$
| eval Value=strftime(selection, "%F %T")</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>
Hi @sintjm , as @yuanliu also said, you need a correlation key to correlate the events. If you have one, you can use it in a stats command, and this is the best solution:

<your_search>
| stats values(Resp_time) AS Resp_time values(Req_time) AS Req_time BY key
| eval diff=Resp_time-Req_time

If you don't have one and you're sure that the events are always sequential, you could use the transaction command:

<your_search>
| transaction maxevents=2
| table duration

Ciao. Giuseppe
Hi @nabeel652, You should use valid values from the dropdown contents as the default and initial settings. Changing the last_week token initialisation to the formatted value will help you; please try the below:

<fieldset submitButton="false">
  <input type="dropdown" token="week">
    <label>week</label>
    <fieldForLabel>time</fieldForLabel>
    <fieldForValue>start_time</fieldForValue>
    <search>
      <query>| makeresults count=52
| fields - _time
| streamstats count
| eval count=count-1
| eval start_time = relative_time(now(),"-".count."w@w+1d")
| eval time = case(count==1, "Last week", count==0, "Current week", 1==1, strftime(start_time,"%a %d-%b-%Y"))
| table time, start_time
| eval start_time=round(start_time,0)</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </search>
    <default>$last_week$</default>
    <initialValue>$last_week$</initialValue>
  </input>
</fieldset>

The token initialisation that calculates last week wrt now():

<init>
  <eval token="last_week">strftime(relative_time(now(),"-1w@w+1d"),"%a %d-%b-%Y")</eval>
</init>
These errors are completely unrelated. You'd need to dig deeper to find something relevant regarding inputs on the receiving side or outputs on the sending side. And the shape of your graph does look awfully close to a situation with a periodic batch input which then unloads over a limited-throughput connection.
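As a starting point for that digging, here is a sketch of one possible check (assuming default internal logging; adjust the span and grouping to your environment) that charts per-forwarder throughput on the receiving side:

index=_internal sourcetype=splunkd component=Metrics group=tcpin_connections
| timechart span=1m sum(kb) AS received_kb BY sourceHost

A periodic batch input may show up here as regular spikes from one sourceHost, draining at a flat ceiling in between.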
Hello Splunkers, I have a dropdown that calculates week_start for the last whole year. It then has to pick "Last week" as the default. I noticed that the dropdown, instead of remembering the label, adds the value to <default></default>. I've tried to calculate last_week as a token and add it to <default></default>, which it picks up correctly, but it shows the epoch time in the dropdown instead of selecting the corresponding label "Last week". Code for defining the dropdown search and initialising the token $last_week$:

<fieldset submitButton="false">
  <input type="dropdown" token="week">
    <label>week</label>
    <fieldForLabel>time</fieldForLabel>
    <fieldForValue>start_time</fieldForValue>
    <search>
      <query>| makeresults count=52
| fields - _time
| streamstats count
| eval count=count-1
| eval start_time = relative_time(now(),"-".count."w@w+1d")
| eval time = case(count==1, "Last week", count==0, "Current week", 1==1, strftime(start_time,"%a %d-%b-%Y"))
| table time, start_time
| eval start_time=round(start_time,0)</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </search>
    <default>$last_week$</default>
    <initialValue>$last_week$</initialValue>
  </input>
</fieldset>

The token initialisation that calculates last week wrt now():

<init>
  <eval token="last_week">relative_time(now(),"-1w@w+1d")</eval>
</init>
Please post the solution.
Looks like a permission issue. May I know if the lookup file is shared with the right apps/users? Please check it, thanks.
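One way to check the sharing from search (a sketch using the standard REST endpoint; run it as a user who can see the app in question):

| rest /services/data/lookup-table-files
| table title eai:acl.app eai:acl.owner eai:acl.sharing eai:acl.perms.read

If the file doesn't appear here for the affected user, the permissions or app scope are the likely culprit.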
This is because you have a multi-segment data path and eval doesn't like it. Use single quotes to tell eval that log.level is a field name, not some random string.

index="prod_k8s_onprm_dig-k8-prod1" "k8s.namespace.name"="apl-secure-dig-svc-prod1" "k8s.container.name"="abc-def-cust-prof" NOT k8s.container.name=istio-proxy NOT log.level IN(DEBUG,INFO) (error OR exception) (earliest="07/25/2024:11:30:00" latest="07/25/2024:12:30:00")
| addinfo
| bin _time span=30m@m
| stats count(eval('log.level'="ERROR")) as error_count by _time
| eventstats stdev(error_count)
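To see the quoting rule in isolation, here is a minimal sketch on synthetic data (runnable as-is, no index needed):

| makeresults count=2
| streamstats count
| eval 'log.level'=if(count==1, "ERROR", "INFO")
| stats count(eval('log.level'="ERROR")) as error_count

With double quotes, eval would compare against the literal string "log.level"; the single quotes make it dereference the field.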
Typical GDI (getting data in) troubleshooting steps include:
- Verify the input configuration, including the URL and credentials.
- Verify the Splunk server running the add-on can connect to the MS server. Use curl or a similar tool.
- Check splunkd.log for related messages.
- Check the MS logs for related messages.
- If you're using Splunk search to see if data is coming in, then double-check the SPL. Verify the index name. Try specifying latest=+1y to account for timestamp errors.
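For example, the connectivity and search checks might look like this (a sketch; the host, port, and index name are placeholders for your environment):

# From the Splunk server running the add-on
curl -vk https://ms-server.example.com:443/

# In Splunk, search the target index with a wide time window
index=your_ms_index earliest=-7d latest=+1y | head 10

The latest=+1y trick surfaces events that were indexed with future timestamps due to timezone or parsing errors.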
Thanks for reading, and have a good day.

1. We made a lookup file as a CSV (in our test environment), then copied and pasted it onto the production server, but SPL doesn't recognize the lookup file, even though I double-checked that it is in the correct location in the app: /opt/splunk/etc/apps/<my app>/lookups. Do I need to check something more, or do I need to make a new CSV? Also, the file's owner is different in the new environment (it was admin in the old one); does this have any relation to file permissions (chmod) in Linux?

2. I can't fully understand the scope a lookup applies to. If we make or apply a lookup file in app "A", can app "B" also use that lookup file? If I put the lookup file in the admin's app and set all of its permissions to "all", will a lookup run from another user's page correctly reference that lookup table? I don't understand the point of grouping lookup files per app.
Hi there! This was first published as a known issue in 9.0.2: https://docs.splunk.com/Documentation/Splunk/9.0.2/ReleaseNotes/KnownIssues See the entry for SPL-235416. The preview UI in Ingest Actions has since been fixed in:
- Splunk Enterprise version 9.0.5+
- Splunk Cloud Platform version 9.0.2303+
Hello, we receive data via _TCP_ROUTING from forwarders run by another team using another Splunk cluster. We don't use the same indexes. Instead of routing data based on the source or host we receive on our indexers, is it possible to route data from one index (specified in their inputs.conf) to our own index? In particular, what would the props.conf stanza be? Thanks.
I just got the exact same issue trying to upgrade from 7.2.0 to 9.2.2. I don't know yet if it's the solution, but we're requesting the 7.2.0 MSI file in order to satisfy the pop-up. In our case, we do not want to risk a corrupt installation by only deleting the Splunk files.
After disabling the Splunk readiness app due to a vulnerability recommendation, I restarted my search head, which had the KV store. Once I restarted, splunkd started with no errors, but the search head web interface will not come back up. Apart from the app change, nothing else has changed. Any recommendations on how to address this?

Search peer XXXXX has the following message: KV store changed status to failed. An error occurred during the last operation ('getServerVersion', domain: '15', code: '13053'): No suitable servers found (serverSelectionTryOnce set): connection closed calling ismaster on XXXXX:8191

Thank you
Why am I not getting any results? I see there are events.

index="prod_k8s_onprm_dig-k8-prod1" "k8s.namespace.name"="apl-secure-dig-svc-prod1" "k8s.container.name"="abc-def-cust-prof" NOT k8s.container.name=istio-proxy NOT log.level IN(DEBUG,INFO) (error OR exception) (earliest="07/25/2024:11:30:00" latest="07/25/2024:12:30:00")
| addinfo
| bin _time span=30m@m
| stats count(eval(log.level="ERROR")) as error_count by _time
| eventstats stdev(error_count)
First, using a subsearch should not be your first choice. Second, Splunk is not procedural; forcing recursion onto commands will result in some unmaintainable code. You need to provide additional information about your data, beyond the fact that your second dataset doesn't have eventId readily extracted. I assume that the first "search" and the second have different source types. I also assume that the search period is roughly identical in all three. But I don't understand what the dataset for the third "search" is. Is it yet another indexed source? Is it some sort of lookup table?

To ask answerable questions in this forum, follow the following golden rules that I call the Four Commandments:
1. Illustrate data input (in raw text, anonymize as needed), whether they are raw events or output from a search that volunteers here do not have to look at.
2. Illustrate the desired output from illustrated data.
3. Explain the logic between illustrated data and desired output without SPL.
4. If you also illustrate attempted SPL, illustrate actual output and compare with desired output; explain why they look different to you if that is not painfully obvious.
That's better. So, you are looking at adjacent and equal time intervals. In this case, a time bucket is perhaps the simplest. Let me first give you a hard-coded example.

index="prod_k8s_onprem_dii--prod1" "k8s.namespace.name"="abc-secure-dig-servi-prod1" "k8s.container.name"="abc-cdf-cust-profile" (earliest="07/25/2024:11:30:00" latest="07/25/2024:12:30:00")
| addinfo
| bin _time span=30m@m
| stats count(eval(status="Error")) as error_count by _time
| eventstats stdev(error_count)

Is this something you are looking for?

index="prod_k8s_onprem_dii--prod1" "k8s.namespace.name"="abc-secure-dig-servi-prod1" "k8s.container.name"="abc-cdf-cust-profile" (earliest=first_earliest latest=first_latest) OR (earliest=second_earliest latest=second_latest)
| eval period=if(_time>=first_earliest AND _time<first_latest,"First","Second")
| stats count(eval(status="Error")) as error_count count as event_count by period
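To make the second form concrete, here is a sketch with the placeholders filled in for two hypothetical adjacent 30-minute windows:

index="prod_k8s_onprem_dii--prod1" "k8s.namespace.name"="abc-secure-dig-servi-prod1" "k8s.container.name"="abc-cdf-cust-profile" (earliest="07/25/2024:11:30:00" latest="07/25/2024:12:00:00") OR (earliest="07/25/2024:12:00:00" latest="07/25/2024:12:30:00")
| eval period=if(_time<strptime("07/25/2024:12:00:00","%m/%d/%Y:%H:%M:%S"),"First","Second")
| stats count(eval(status="Error")) as error_count count as event_count by period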
I have 3 separate queries. I need to run them one after the other.
1. The first query returns a field from each event that matches the search, say eventId.
2. I need to make another query to identify events which have this eventId in the event, not in a specific field. There will be zero or one row returned in this case. I want to read a field on that event, say "traceId".
3. Now I need to make a 3rd query using that returned traceId. There will be only one event. With the result returned, I need to fetch the "fileName" from that matched event. This fileName is the final result that I need.

Any guidelines / examples on how to do this?

Known issue: in search 2, the eventId from search 1 is not searchable as a field; rather, it should be searched in the _raw events as such. I tried a sub-search, but it always results in an OR statement on a field, and I don't have such a field in the _raw event for search 2. Apologies if I sounded confusing.
Assuming your search gives you some useful fields to make the determination based on some logic, the next step is to follow my Four Commandments of posing answerable questions in this forum:
1. Illustrate data input (in raw text, anonymize as needed), whether they are raw events or output from a search that volunteers here do not have to look at.
2. Illustrate the desired output from illustrated data.
3. Explain the logic between illustrated data and desired output without SPL.
4. If you also illustrate attempted SPL, illustrate actual output and compare with desired output; explain why they look different to you if that is not painfully obvious.