All Posts
Looks like a permission issue. Could you check whether the lookup file is shared with the right apps and users? Thanks.
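One way to verify the sharing is to look at the lookup's permissions in the app's metadata. A minimal sketch of a metadata/local.meta stanza that shares a lookup file globally, assuming a hypothetical file named my_lookup.csv (illustrative, not from the original post):

[lookups/my_lookup.csv]
access = read : [ * ], write : [ admin ]
export = system

With export = system the lookup is visible to all apps; without it, the lookup stays private to the app that owns it.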
This is because you have a multi-segment field name and eval doesn't like it. Use single quotes to tell eval that log.level is a field name, not some random string.

index="prod_k8s_onprm_dig-k8-prod1" "k8s.namespace.name"="apl-secure-dig-svc-prod1" "k8s.container.name"="abc-def-cust-prof" NOT k8s.container.name=istio-proxy NOT log.level IN(DEBUG,INFO) (error OR exception) (earliest="07/25/2024:11:30:00" latest="07/25/2024:12:30:00")
| addinfo
| bin _time span=30m@m
| stats count(eval('log.level'="ERROR")) as error_count by _time
| eventstats stdev(error_count)
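To spell out the quoting rule (a general eval note, not specific to this thread): in eval expressions, double quotes produce a string literal and single quotes refer to a field, which matters whenever a field name contains dots.

| eval is_error=if('log.level'="ERROR", 1, 0)

Here 'log.level' is the field and "ERROR" is the literal it is compared against; writing "log.level" on the left would compare the literal string "log.level" instead of the field's value.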
Typical GDI troubleshooting steps include:
- Verify the input configuration, including the URL and credentials.
- Verify the Splunk server running the add-on can connect to the MS server. Use curl or a similar tool.
- Check splunkd.log for related messages.
- Check the MS logs for related messages.
- If you're using Splunk search to see whether data is coming in, then double-check the SPL. Verify the index name. Try specifying latest=+1y to account for timestamp errors. (A sketch of these search-side checks follows below.)
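A minimal sketch of those search-side checks, with placeholder add-on and index names (not from the post):

index=_internal source=*splunkd.log* (log_level=ERROR OR log_level=WARN) "my_addon_name"

index="my_index" earliest=-24h latest=+1y
| stats count by sourcetype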
Thanks for reading, and have a good day.

1. We made a lookup file as a CSV (in a test environment), then copied and pasted it onto the real server, but SPL doesn't recognize the lookup file, even though I double-checked that it is in the correct location in the app ("/opt/splunk/etc/apps/<my app>/lookups"). Do I need to check something more, or do I need to make a new CSV? Also, the file's owner is different in the new environment (it was admin before); does this have any relation to file permissions (chmod) on Linux?

2. I can't fully understand how a lookup's scope works. If we create or apply a lookup file in app "A" with all permissions set to "all", can app "B" (or another user's page) also use that lookup, referencing the lookup table that lives in app "A"? I don't understand what it means for lookup files to be grouped per app.
Hi there! This was first published as a known issue in 9.0.2: https://docs.splunk.com/Documentation/Splunk/9.0.2/ReleaseNotes/KnownIssues (see the entry for SPL-235416). The preview UI in Ingest Actions has since been fixed in Splunk Enterprise 9.0.5+ and Splunk Cloud Platform 9.0.2303+.
Hello, we receive data via _TCP_ROUTING from forwarders belonging to another team that runs a separate Splunk cluster. We don't use the same indexes. Instead of routing data based on source or host as it arrives on our indexers, is it possible to route data from one index (specified in their inputs.conf) to our own index? In particular, what would the props.conf stanza be? Thanks.
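For what it's worth, overriding the destination index is normally done with a props.conf/transforms.conf pair on the receiving indexers. A minimal sketch, assuming hypothetical index names their_index and our_index; note the props stanza must key on sourcetype, source, or host, since props cannot match on index directly:

props.conf:
[their_sourcetype]
TRANSFORMS-rewrite_index = route_to_our_index

transforms.conf:
[route_to_our_index]
SOURCE_KEY = _MetaData:Index
REGEX = ^their_index$
DEST_KEY = _MetaData:Index
FORMAT = our_index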
I just hit the exact same issue trying to upgrade from 7.2.0 to 9.2.2. I don't know yet whether it's the solution, but we're requesting the 7.2.0 MSI file in order to satisfy the pop-up. In our case, we do not want to risk a corrupt installation by only deleting the Splunk files.
After disabling the Splunk readiness app due to a vulnerability recommendation, I restarted my search head, which had the KV store. After the restart, splunkd started with no errors, but the search head web interface will not come back up. Apart from the app change, nothing else has changed. Any recommendations on how to address this?

Search peer XXXXX has the following message: KV store changed status to failed. An error occurred during the last operation ('getServerVersion', domain: '15', code: '13053'): No suitable servers found (serverSelectionTryOnce set): connection closed calling ismaster on XXXXX:8191.

Thank you
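Not an answer from the thread, but a common first check when the KV store reports this error is the KV store status and the mongod log, e.g.:

$SPLUNK_HOME/bin/splunk show kvstore-status

and look for startup errors in $SPLUNK_HOME/var/log/splunk/mongod.log.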
Why am I not getting any results? I see there are events.

index="prod_k8s_onprm_dig-k8-prod1" "k8s.namespace.name"="apl-secure-dig-svc-prod1" "k8s.container.name"="abc-def-cust-prof" NOT k8s.container.name=istio-proxy NOT log.level IN(DEBUG,INFO) (error OR exception) (earliest="07/25/2024:11:30:00" latest="07/25/2024:12:30:00")
| addinfo
| bin _time span=30m@m
| stats count(eval(log.level="ERROR")) as error_count by _time
| eventstats stdev(error_count)
First, using a subsearch should not be your first choice. Second, Splunk is not procedural; forcing recursion onto commands will result in unmaintainable code. You need to provide additional information about your data, beyond the fact that your second dataset doesn't have eventId readily extracted. I assume that the first "search" and the second have different source types. I also assume that the search period is roughly identical in all three. But I don't understand what the dataset for the third "search" is. Is it yet another indexed source? Is it some sort of lookup table?

To ask answerable questions in this forum, follow the golden rules that I call the Four Commandments:
1. Illustrate data input (in raw text, anonymized as needed), whether it is raw events or output from a search that volunteers here do not have to look at.
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between the illustrated data and desired output without SPL.
4. If you also illustrate attempted SPL, illustrate the actual output, compare it with the desired output, and explain why they look different to you if that is not painfully obvious.
That's better. So you are looking at adjacent, equal time intervals. In this case, a time bucket is perhaps the simplest approach. Let me first give you a hard-coded example.

index="prod_k8s_onprem_dii--prod1" "k8s.namespace.name"="abc-secure-dig-servi-prod1" "k8s.container.name"="abc-cdf-cust-profile" (earliest="07/25/2024:11:30:00" latest="07/25/2024:12:30:00")
| addinfo
| bin _time span=30m@m
| stats count(eval(status="Error")) as error_count by _time
| eventstats stdev(error_count)

Is this something you are looking for?

index="prod_k8s_onprem_dii--prod1" "k8s.namespace.name"="abc-secure-dig-servi-prod1" "k8s.container.name"="abc-cdf-cust-profile" (earliest=first_earliest latest=first_latest) OR (earliest=second_earliest latest=second_latest)
| eval period=if(_time>=first_earliest AND _time<first_latest,"First","Second")
| stats count(eval(status="Error")) as error_count count as event_count by period
I have 3 separate queries. I need to run them one after the other.
1. The first query returns a field from each event that matches the search, say eventId.
2. I need to make another query to identify events which have this eventId somewhere in the event, not in a specific field. There will be zero or one row returned in this case. I want to read a field on that event, say "traceId".
3. Now I need to make a third query using that returned traceId. There will be only one event. From the result returned, I need to fetch the "fileName" from that matched event. This fileName is the final result that I need.

Any guidelines / examples on how to do this? Known issue: in search 2, the eventId from search 1 is not searchable as a field; it has to be searched against the _raw events as such. I tried a subsearch, but it always results in an OR statement on a field, and I don't have such a field in the _raw events for search 2. Apologies if this sounds confusing.
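Not from the thread, but one common pattern for searching a subsearch value against _raw is to rename the field to "search" before | format, which makes the subsearch emit bare terms instead of field=value pairs. A minimal sketch with hypothetical index names (first_index, second_index, third_index):

index=third_index
    [ search index=second_index
        [ search index=first_index <your filters>
        | fields eventId
        | rename eventId as search
        | format ]
    | head 1
    | fields traceId
    | format ]
| head 1
| fields fileName

Whether this performs acceptably depends on data volume and subsearch limits; the map command is another option for strictly sequential searches.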
Assuming your search gives you some useful fields to make the determination based on some logic, the next step is to follow my Four Commandments of posing answerable questions in this forum:
1. Illustrate data input (in raw text, anonymized as needed), whether it is raw events or output from a search that volunteers here do not have to look at.
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between the illustrated data and desired output without SPL.
4. If you also illustrate attempted SPL, illustrate the actual output, compare it with the desired output, and explain why they look different to you if that is not painfully obvious.
Usually it's best to use a totally different home directory for the splunk user, like /home/splunk, and even to lock this user and set nologin or something similar as the login shell. I suppose you have Unix admins, or use Google to find out how to switch the home directory to the correct one.
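A minimal sketch of what that could look like on a typical Linux host (not from the post; check your distribution's usermod documentation before running anything):

usermod -d /home/splunk splunk       # set the home directory (add -m to also move contents)
usermod -s /usr/sbin/nologin splunk  # set a non-login shell
usermod -L splunk                    # lock the account's password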
We are doing an API/app deployment in one region at 12:00 PM EST.
- The first time frame would be 11:30 AM to 12:00 PM EST (I need to get the error count).
- The second time frame would be 12:00 PM to 12:30 PM EST (need to get the error count).
We need to consider the generated log volume as well, and get the deviation in the error count between these two time frames. Let's say, if it exceeds a certain threshold, I will proceed with or stop the deployment. So the output of the query is a deviation threshold or percentage.
I faced this same issue. Resolved it by adding the list_storage_passwords capability to the non-admin role.
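For reference, a minimal sketch of what that looks like in authorize.conf, with a hypothetical role name (the same change can be made in the UI under Settings > Roles):

[role_my_nonadmin_role]
list_storage_passwords = enabled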
You will need a common value in the two types of events to correlate them. For example, if each pair has a unique transaction ID, you can do

| stats values(Resp_time) as Resp_time values(Req_time) as Req_time by transaction_id
| eval duration = Resp_time - Req_time

Alternatively, if you have some other way to determine a pairing, e.g., the two always happen within a deterministic interval (say, a request comes in at 5 minutes into the hour, a unique response is sent within the hour, and NO other request comes in during the same hour), you can use that as the criterion. There may be other conditions where you would use transaction. Unless you give us the exact condition, mathematically there is no solution.
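To make the transaction alternative concrete, a minimal sketch with placeholder index and sourcetype names (transaction computes a duration field as the span between the first and last event in each group):

index=my_index (sourcetype=requests OR sourcetype=responses)
| transaction transaction_id
| table transaction_id duration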
Instead of stats, use eventstats.

index="oap"
| eventstats perc25(tt) as P25, perc50(tt) as P50, perc75(tt) as P75 by oper
| foreach P25 P50 P75
    [eval <<FIELD>>count = if(tt > <<FIELD>>, 1, 0)]
| stats values(P*count) as P*count by oper P25 P50 P75
Something like this - obviously you will need to adjust it depending on your events and required time periods:

index="prod_k8s_onprem_dii--prod1" "k8s.namespace.name"="abc-secure-dig-servi-prod1" "k8s.container.name"="abc-cdf-cust-profile" (earliest=first_earliest latest=first_latest) OR (earliest=second_earliest latest=second_latest)
| eval period=if(_time>=first_earliest AND _time<first_latest,"First","Second")
| stats count(eval(status="Error")) as error_count count as event_count by period
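As a sketch of how the placeholders might be filled in for the 11:30-12:30 window discussed in the thread (the earliest/latest placeholders need literal time values, the boundary for the eval is converted with strptime, and the timestamp format string is an assumption to adjust to your setup):

index="prod_k8s_onprem_dii--prod1" "k8s.namespace.name"="abc-secure-dig-servi-prod1" "k8s.container.name"="abc-cdf-cust-profile" (earliest="07/25/2024:11:30:00" latest="07/25/2024:12:30:00")
| eval boundary=strptime("07/25/2024:12:00:00", "%m/%d/%Y:%H:%M:%S")
| eval period=if(_time<boundary, "First", "Second")
| stats count(eval(status="Error")) as error_count count as event_count by period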