All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi folks, I am new to Alert Manager and I am trying to configure it. I have Splunk Cloud, so my access to the config files is limited. So far the alerts are all coming through without any issue, but when I try to assign an alert to another person, it seems it won't let me save any updates or reassign the alert entry. Any ideas which file I need to change and what the changes are? I need to be very specific for the Splunk Support team for Cloud services. Any help would be greatly appreciated.
I'm attempting to pass a variable/value between custom functions in a playbook. I've done this before without issue, but in this scenario I'm running into the following error: "local variable 'json' referenced before assignment". I'm attempting to pass an HTML string, but it's erroring on a line in the destination function that is locked out/not editable:

get_user_session__ip_list_testing = json.loads(phantom.get_run_data(key='get_user_session:ip_list_testing'))

Any ideas how I can accomplish what I'm after? Thanks in advance.
Hi everyone, I have enabled token-based authentication and created a few tokens. I can see them in the UI, but I want to know where they are stored in the backend, i.e. in which .conf file.
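In the meantime, a minimal sketch for inspecting tokens without touching the filesystem, assuming the standard authorization/tokens REST endpoint available in recent Splunk versions (the table fields shown are illustrative; run the bare rest call first to see what your version actually returns):

| rest /services/authorization/tokens
| table title claims.sub claims.aud claims.exp status

This only lists the token metadata; it does not answer where the backend store lives, which is what the original question asks.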
Hi, I am trying to configure a Universal Forwarder and a Heavy Forwarder. On the UF I see:

Active forwards: None
Configured but inactive forwards: A.B.C.D:9997

splunkd.log:

07-23-2021 11:45:00.807 +0000 WARN AutoLoadBalancedConnectionStrategy [42092 TcpOutEloop] - Applying quarantine to ip=A.B.C.D port=9997 _numberOfFailures=2
07-23-2021 11:45:42.188 +0000 WARN TcpOutputProc [42091 parsing] - The TCP output processor has paused the data flow. Forwarding to host_dest=A.B.C.D inside output group default-autolb-group from host_src=UF_name has been blocked for blocked_seconds=3000. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
07-23-2021 11:47:22.196 +0000 WARN TcpOutputProc [42091 parsing] - The TCP output processor has paused the data flow. Forwarding to host_dest=A.B.C.D inside output group default-autolb-group from host_src=UF_name has been blocked for blocked_seconds=3100. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
07-23-2021 11:49:02.204 +0000 WARN TcpOutputProc [42091 parsing] - The TCP output processor has paused the data flow. Forwarding to host_dest=A.B.C.D inside output group default-autolb-group from host_src=UF_name has been blocked for blocked_seconds=3200. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
07-23-2021 11:50:29.730 +0000 INFO AutoLoadBalancedConnectionStrategy [42092 TcpOutEloop] - Removing quarantine from idx=A.B.C.D:9997
07-23-2021 11:50:29.732 +0000 ERROR TcpOutputFd [42092 TcpOutEloop] - Read error. Connection reset by peer
07-23-2021 11:50:29.734 +0000 ERROR TcpOutputFd [42092 TcpOutEloop] - Read error. Connection reset by peer
07-23-2021 11:50:29.734 +0000 WARN AutoLoadBalancedConnectionStrategy [42092 TcpOutEloop] - Applying quarantine to ip=A.B.C.D port=9997 _numberOfFailures=2

tcpdump also showed me a reset from the HF side. I have communication between the UF and HF; all necessary ports are open.

[root@UF_name ~]# nc -z -v A.B.C.D 9997
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to A.B.C.D:9997.
Ncat: 0 bytes sent, 0 bytes received in 0.01 seconds.
[root@UF_name ~]# nc -z -v A.B.C.D 8000
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to A.B.C.D:8000.
Ncat: 0 bytes sent, 0 bytes received in 0.01 seconds.
[root@UF_name ~]# nc -z -v A.B.C.D 8089
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to A.B.C.D:8089.
Ncat: 0 bytes sent, 0 bytes received in 0.01 seconds.

How can I solve this problem? Any tips?
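An editor's note with a minimal sketch, not from the original post: a connection that is accepted on 9997 but then reset, with the output processor pausing, often means the receiver either has no splunktcp input enabled or is itself blocked downstream. Assuming the default configuration location on the heavy forwarder (adjust path and app context to your deployment):

# $SPLUNK_HOME/etc/system/local/inputs.conf on the heavy forwarder
[splunktcp://9997]
disabled = 0

The equivalent CLI is "splunk enable listen 9997". If the input is already enabled, check whether the HF's own forwarding to its indexers is blocked, since full output queues on the HF can also cause it to stop accepting data from forwarders.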
I am doing the labs for Fundamentals Part 2 and I am not understanding something. I have to use the startswith and endswith options of the transaction command to display transactions that begin with an addtocart action and end with a purchase action. The successful query for that is:

index=web sourcetype=access_combined
| transaction clientip startswith=action="addtocart" endswith=action="purchase"
| table clientip, JSESSIONID, product_name, action, duration, eventcount, price

However, when I try the following query:

index=web sourcetype=access_combined
| transaction clientip startswith="addtocart" endswith="purchase"
| table clientip, JSESSIONID, product_name, action, duration, eventcount, price

the output I get is not correct. I am interested to know why omitting the "action" field in startswith and endswith gives a different result and no longer groups the events. Thank you in advance for your help.
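An editor's side note, not part of the original post: as far as the transaction documentation goes, a bare quoted string such as startswith="addtocart" is treated as a search expression applied to the whole event (so it may match the term anywhere in the raw text), whereas startswith=action="addtocart" constrains the action field specifically. The filters can also be written in eval form; a sketch using the same lab data:

index=web sourcetype=access_combined
| transaction clientip startswith=eval(action=="addtocart") endswith=eval(action=="purchase")
| table clientip, JSESSIONID, product_name, action, duration, eventcount, price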
Can you provide an example search query or script for detecting whether a Linux server is up or down? I am looking for the best way to set up an up/down status alert for Linux servers.
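A minimal sketch of one common approach (not from the original post): treat a host that has stopped sending data as down. It assumes your Linux hosts forward into an index here called os (a placeholder) and that 15 minutes of silence is a reasonable threshold:

| metadata type=hosts index=os
| eval minutes_since_last_event=round((now() - recentTime) / 60, 0)
| where minutes_since_last_event > 15
| table host lastTime minutes_since_last_event

Saved as a scheduled alert, any host returned by this search has gone quiet. A true up/down signal (for example scripted ping checks or OS uptime data) would be needed to distinguish a downed server from a stopped forwarder.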
Hi, I am using a base search to display a token. However, I noticed it flicks to 0 and then to the number I need. I need something like this:

<!--condition match="$result.count$ != 0"-->

but there is also a case where I need it to be 0. So how can I get it to display the number only when the job is 100% done? I have tried done and finalized, but they all display the 0 at the wrong time and then change to the correct number. I have also tried adding

<condition match="$job.resultCount$ == 1">

but I still get the 0 and then the number I need.

<search base="basesearch_MAIN">
  <!-- Find out how many processes are being monitored -->
  <query>| stats count</query>
  <progress>
    <set token="Token_no_of_Process">$result.count$</set>
  </progress>
</search>
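A sketch of one thing worth trying (an editor's suggestion, not from the original post): set the token in a <done> handler instead of <progress>, so it is only assigned once the job has finished, even when the final count is legitimately 0:

<search base="basesearch_MAIN">
  <!-- Find out how many processes are being monitored -->
  <query>| stats count</query>
  <done>
    <set token="Token_no_of_Process">$result.count$</set>
  </done>
</search>

This assumes a Simple XML version that supports the <done> element; <progress> fires repeatedly while the job is still running, which is what produces the intermediate 0.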
Hi, I am deploying Splunk 8.1.4 from scratch in our lab and I am having some difficulty understanding how the data inputs included in the TA are supposed to be managed. Following the official instructions, I configured inputs.conf and props.conf in /local, enabling two stanzas pointing to a test index:

[WinEventLog://Application]
[WinEventLog://Security]

How can I find the new inputs in the GUI? I don't really understand how the TA binds with the UI; I don't see any new input under the local inputs. Is this normal? Also, I read that the index configuration was removed from the add-on and needs to be configured manually, but I don't see any recommendation about which index names to use. Does it not really matter? I can imagine that the Windows apps might expect specific index names to work properly. Sorry for the basic questions; I couldn't find the answer myself digging through the documentation. Many thanks.
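For reference, a minimal sketch of what such a stanza typically looks like with an explicit index (the index name wineventlog is only an example, not a requirement of the add-on; any index you have created will work as long as your searches and apps point at it):

# inputs.conf in the TA's local directory on the Windows host
[WinEventLog://Security]
disabled = 0
index = wineventlog

[WinEventLog://Application]
disabled = 0
index = wineventlog

Inputs defined in a TA's local directory do not necessarily appear on the Web UI's local inputs page; checking the resolved configuration with "splunk btool inputs list --debug" on the host is a more reliable way to confirm they are active.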
Hello all, this is probably either very easy or impossible in Splunk, but I can't find any sufficient answers. I am trying to remove a single property from a JSON event (during parsing; I don't want it at all), e.g. I want to remove the "country" property and everything in it from every event that comes into Splunk. Is something like that possible? I have tried some SEDCMD settings in props.conf but with no success. Do you have any ideas? Thank you very much.

{
  "random": 23,
  "random float": 28.173,
  "bool": false,
  "date": "1990-08-31",
  "regEx": "helloooooooooooooooooooooooooooooooooooooooooooooooooo world",
  "enum": "generator",
  "firstname": "Latisha",
  "lastname": "Alexandr",
  "city": "Tiraspol",
  "country": "Algeria",
  "countryCode": "MC",
  "email uses current data": "Latisha.Alexandr@gmail.com",
  "email from expression": "Latisha.Alexandr@yopmail.com",
  "array": [ "Dyann", "Christal", "Renie", "Tilly", "Margette" ],
  "array of objects": [
    { "index": 0, "index start at 5": 5 },
    { "index": 1, "index start at 5": 6 },
    { "index": 2, "index start at 5": 7 }
  ],
  "Raquela": { "age": 50 }
}
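A sketch of a SEDCMD that would handle the sample event above, assuming "country" is always a simple quoted string value (a nested object or array value would need a different approach, such as an index-time INGEST_EVAL or pre-processing outside Splunk):

# props.conf on the indexer or heavy forwarder that first parses the data
[your_sourcetype]
SEDCMD-remove_country = s/"country":\s*"[^"]*",?\s*//g

SEDCMD rewrites the raw text at parse time, so it only affects newly indexed events, and the stanza name your_sourcetype is a placeholder for the real sourcetype.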
Hello team,

rex field=_raw "string_list=%25(?<new_field1>\w+)%25"

The above extraction gets a word between %25 and %25. If the word is only one or two letters long, I want to ignore/skip it. Thanks in advance for the help. @field-extraction
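A sketch of one way to do that, simply requiring at least three word characters in the capture (everything else is taken from the post; only the quantifier changes):

| rex field=_raw "string_list=%25(?<new_field1>\w{3,})%25"

Events whose captured word is one or two characters long will leave new_field1 null, and can be filtered out with | where isnotnull(new_field1) if needed.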
Hi, I have a summary index which gets indexed once a month. I have a query which runs from the current month, looks back at the last 6 months, and provides a report. Is it possible to rewrite the query to show a trend, i.e. go over each month's events, look back 6 months of data for each month, and provide a report?

Here is the query which looks back at the last 6 months from the current month. I would like to do the same for all months (look back from each month) and produce a trend.

index=summary source=sre_slo_BE_qlatency_permodule_monthly
| where _time>=relative_time(now(),"-6mon@mon")
| eval Month=Month + "-" + Year
| chart values(p90Latency) as P90Latency by Month, Module useother=f limit=10000
| eval MonthYear=Month, Year=substr(Month,5,4), Month=substr(Month,0,3)
| fields - Year
| table MonthYear *
| transpose 20 header_field=MonthYear, column_name=Module
| foreach *20* [ eval Max=case(Max>=if(isnull('<<FIELD>>'),0,'<<FIELD>>'),Max,true(),if(isnull('<<FIELD>>'),0,'<<FIELD>>'))]
| where Max>30000
| foreach *20* [eval <<FIELD>>=ROUND(('<<FIELD>>')/1000,2)]
| fields - Max
| rename Module as MainModule
| eval RequestType="Business Event"
| lookup SLOHighToleranceLookup RequestType OUTPUTNEW Module
| eval Module=if(isnull(Module), "null", Module)
| where MainModule != Module
| fields - Module, RequestType
| rename MainModule as Module
| eval ViolationCount=0, LastViolatedMonth="", LastViolatedResponse=0, TotalViolationCount=0
| foreach *-2020 or *-2021 [ | eval LastViolatedMonth = if('<<FIELD>>'>30,"<<FIELD>>", LastViolatedMonth) , LastViolatedMonthNumber = substr(LastViolatedMonth, 0, 2) , ViolationCount=if(('<<FIELD>>'>30), ViolationCount+1, ViolationCount) , LastViolatedResponse=if('<<FIELD>>'>30,'<<FIELD>>', LastViolatedResponse) , Deviation=case(LastViolatedResponse>30,round(((LastViolatedResponse-30)/30)*100,1)) , Priority = case( (Deviation >= 100 AND ViolationCount >=1), "P1" , ((Deviation >= 75 AND Deviation < 100) AND ViolationCount >=3), "P1" , ((Deviation >= 75 AND Deviation < 100) AND (ViolationCount >= 0 AND ViolationCount < 3)), "P2" , ((Deviation >= 50 AND Deviation < 75) AND ViolationCount >= 3), "P2" , ((Deviation >= 50 AND Deviation < 75) AND (ViolationCount >= 0 AND ViolationCount < 3)), "P3" , ((Deviation >= 25 AND Deviation < 50) AND ViolationCount >= 3), "P3" , ((Deviation >= 25 AND Deviation < 50) AND (ViolationCount >= 1 AND ViolationCount < 3)), "P4" , ((Deviation > 0 AND Deviation < 25) AND ViolationCount >= 0), "P4" )]
| eval LastViolatedMonthNumber = substr(LastViolatedMonth, 0, 2) , LastViolatedMonthYear = substr(LastViolatedMonth, 4, 4)
| eval LastViolatedMonth = case(LastViolatedMonthNumber==01, "Jan", LastViolatedMonthNumber==02, "Feb", LastViolatedMonthNumber==3, "Mar", LastViolatedMonthNumber==4, "Apr", LastViolatedMonthNumber==5, "May", LastViolatedMonthNumber==6, "Jun", LastViolatedMonthNumber==7, "Jul", LastViolatedMonthNumber==8, "Aug", LastViolatedMonthNumber==9, "Sep", LastViolatedMonthNumber==10, "Oct", LastViolatedMonthNumber==11, "Nov", LastViolatedMonthNumber==12, "Dec")
| eval LastViolatedMonth=LastViolatedMonth + "-" + LastViolatedMonthYear
| fields Module, LastViolatedMonth, LastViolatedResponse, ViolationCount, Deviation, Priority, LastViolatedMonthNumber, LastViolatedMonthYear
| sort - LastViolatedResponse
| rename LastViolatedMonth as "Last Violation Month", LastViolatedResponse as "Last Violation p90ResponseTime (s)", Deviation as "Deviation (%)", ViolationCount as "Missed Count"
| eval CurrentMonth = strftime(now(), "%m"), CurrentYear=strftime(now(), "%Y"), ViolationMonthDifference=if(CurrentYear>LastViolatedMonthYear, (12-LastViolatedMonthNumber)+CurrentMonth, CurrentMonth-LastViolatedMonthNumber)
| where ViolationMonthDifference<=3
| eval Priority = if(Priority=="P1" AND LastViolatedMonthNumber != CurrentMonth-1 , "P2", Priority)
| fields - LastViolatedMonthNumber, LastViolatedMonthYear, CurrentMonth, CurrentYear, ViolationMonthDifference

Thanks
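A deliberately small sketch of the general pattern for a per-month rolling lookback (it reuses the index and source from the post, but the field handling is simplified and would need to be merged with the full logic above). The idea is to aggregate once per month per module, sort, and let streamstats compute each month's trailing six-month window:

index=summary source=sre_slo_BE_qlatency_permodule_monthly
| eval month=strftime(_time, "%Y-%m")
| stats max(p90Latency) as p90Latency by Module month
| sort 0 Module month
| streamstats global=f window=6 max(p90Latency) as rolling_6mo_p90 count as months_in_window by Module

Each row then carries the worst p90 over that module's previous six months (including the current one), which can be charted as a trend instead of a single point-in-time report.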
Hello, I used:

| rest /services/server/info | table host kvStoreStatus

to check the KV store status after upgrading Splunk from 8.0.4.1 to 8.1.3. The results are coherent with the messages for the SHC; that part is taken care of. But for my indexers (in a cluster), the value is either "ready" or "failed". I'm not sure I understand what either really means, and I have no clue how to fix it, as my understanding is that the usual commands for that are for search heads. Does this mean I can apply the same procedure as https://docs.splunk.com/Documentation/Splunk/8.2.1/Admin/ResyncKVstore ? Thanks. Regards, Ema
Hi, as mentioned in the subject, I want to perform operations on a list of values with a single value. To be clearer, here's my search:

index="my_index"
| stats limit=15 values(my_transaction) as transactions by group_name
| eventstats median(transactions) as median_transaction by group_name
| eval dv=(abs(transactions-median_transactions))

However, "dv" is empty. I am assuming this is because "transactions" is an array/list while "median_transaction" is a single value for each group. If my assumption is correct, what's the best way to perform the operation for each value in "transactions" against "median_transaction" for each group?
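A sketch, assuming Splunk 8.0 or later where the mvmap eval function is available, and assuming the field-name mismatch in the last eval above (median_transaction vs. median_transactions) is corrected. Replacing that final eval with:

| eval dv=mvmap(transactions, abs(transactions - median_transaction))

mvmap evaluates the expression once for each value of the multivalue field and returns a multivalue result, so dv ends up with one deviation per transaction value within the group.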
Hi all, I currently have a pivot table which has a column named 'alias'. I was wondering, is there a way to sort that column so that the value "error" always shows up on top, with the remaining values under it? The default alphabetical ordering doesn't work here: in both alphabetical and reverse alphabetical order, "error" does not show up on top and instead ends up somewhere in the middle. Any help would be greatly appreciated!
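A sketch of the usual trick, assuming the pivot's underlying search can be extended with extra SPL (the field name alias comes from the post; sort_key is a throwaway helper field):

| eval sort_key=if(alias=="error", 0, 1)
| sort 0 sort_key alias
| fields - sort_key

Rows with alias="error" get sort_key 0 and float to the top, while everything else stays alphabetically ordered after them.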
Hello my loves, I have one quick question. Let's say I have these two strings:

AUJ.UEIEJ.829839.239383
033.4788383.27383.8HJJJ

What would be the correct regex expression to extract ONLY the string of characters after the first dot and before the second dot? That means from AUJ.UEIEJ.829839.239383 I want UEIEJ, and from 033.4788383.27383.8HJJJ I want 4788383. Thank you my loves for the help! Kindly, C
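A sketch, assuming the strings live in a field called my_string (a placeholder name) and always contain at least two dots:

| rex field=my_string "^[^.]+\.(?<between_dots>[^.]+)\."

The pattern skips everything up to the first dot, then captures every non-dot character up to (but not including) the second dot into between_dots.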
Hi, I need to know if it is possible to create a bar chart with fill patterns to differentiate the bars in addition to colors. I already have the colors portion working but need to fill the bars with patterns. Thanks in advance!! Example: the value with the lowest count gets fewer dots and values with higher counts get more dots. Any other pattern would work as well.
Hi all, good day. I have a situation here: the logs of a particular sourcetype in an index are disappearing. For example, please find the results of a query below:

2021-07-20        0
2021-07-21        10
2021-07-22        232
2021-07-23        3571

If I run the same search again roughly 24 hours later, I get the results below:

2021-07-20       0
2021-07-21       0
2021-07-22       2
2021-07-23       1524

The logs for the older days are disappearing. Note that the index max size is set to unlimited and there are no issues with the other sourcetypes under the same source. Could you please check and let me know what the issue is here?
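A sketch of one quick check (an editor's note, not from the original post): confirm the index's retention and size limits, since events disappearing by age usually points at the freeze policy rather than anything sourcetype-specific. The index name your_index is a placeholder:

| rest /services/data/indexes/your_index
| table title frozenTimePeriodInSecs maxTotalDataSizeMB currentDBSizeMB

If the limits look sane, the other usual suspect is timestamp parsing for that sourcetype, where events get indexed with the wrong _time and therefore fall outside the search window.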
I have a dashboard with multiple inputs. These inputs act as filters on top of a base search. I want:

1. If both phonemdn and devicemdn are provided, it should be an OR between them on top of the base search: base search | search phonemdn=<value> OR devicemdn=<value>
2. If only phonemdn is provided: base search | search phonemdn=<value>
3. If only devicemdn is provided: base search | search devicemdn=<value>

Here is my dashboard XML:

<form>
  <label>Dashboard</label>
  <fieldset submitButton="true" autoRun="true">
    <input type="text" token="phonemdn" searchWhenChanged="false">
      <label>PHONE MDN</label>
      <default></default>
      <change>
        <condition>
          <eval token="phonemdn_exp">if(len(trim($value$)) == 0,"","| search phonemdn=".$value$)</eval>
        </condition>
      </change>
    </input>
    <input type="text" token="devicemdn">
      <label>DEVICE MDN</label>
      <default></default>
      <change>
        <condition>
          <eval token="devicemdn_exp">if(len(trim($value$)) == 0, "" , if(len(trim($phonemdn$)) == 0, "| search devicemdn=".$value$, "OR devicemdn=".$value$))</eval>
        </condition>
      </change>
    </input>
    <input type="dropdown" token="logtype" searchWhenChanged="true">
      <label>LOG TYPE</label>
      <choice value="*">ALL</choice>
      <choice value="server">Watch</choice>
      <choice value="application">Application</choice>
      <change>
        <condition value="server">
          <set token="filter_search_base">| search index=new | spath app | search app=newapp </set>
          <set token="logtype_lab">logtype=server</set>
          <set token="logtype_exp">| search source=Band | eval source="Band"</set>
        </condition>
        <condition value="application">
          <set token="filter_search_base">| search index=main | spath app | search app!=simulator</set>
          <set token="logtype_lab">logtype=Application</set>
          <set token="logtype_exp">| search source=Application</set>
        </condition>
        <condition value="*">
          <set token="filter_search_base">|multisearch [search index=new | spath app | search app=newapp] [search index=main | spath app | search app!=simulator]</set>
          <set token="logtype_lab">All Source</set>
          <set token="logtype_exp"></set>
        </condition>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>SEARCHING: $logtype_lab$ $phonemdn_exp$ $devicemdn_exp$</title>
      <search>
        <query>$filter_search_base$ $phonemdn_exp$ $devicemdn_exp$</query>
        <earliest>$timefield.earliest$</earliest>
        <latest>$timefield.latest$</latest>
      </search>
    </panel>
  </row>
</form>

So my first case always works, but afterwards it feels as if the input values for phonemdn and devicemdn are being cached, and subsequent queries don't work as expected. If I enter both phonemdn and devicemdn, the query is: base search | search phonemdn=<value> OR devicemdn=<value>. If I then delete the value from phonemdn and keep only devicemdn:

actual query: base search OR devicemdn=<value>
expected query: base search | search devicemdn=<value>

It feels as if the phonemdn value from the first query is being cached somehow. Please help me resolve this issue, and let me know if you need more information. Thanks!!
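An editor's sketch of one way to address the stale token, under the assumption that each <change> handler only fires for its own input, so devicemdn_exp is never recomputed when phonemdn is cleared. Re-evaluating both expression tokens in both handlers keeps them in sync (the eval bodies mirror the logic already in the dashboard):

<input type="text" token="phonemdn" searchWhenChanged="false">
  <label>PHONE MDN</label>
  <default></default>
  <change>
    <condition>
      <eval token="phonemdn_exp">if(len(trim($value$)) == 0, "", "| search phonemdn=".$value$)</eval>
      <!-- also recompute the device expression using the current devicemdn token -->
      <eval token="devicemdn_exp">if(len(trim($devicemdn$)) == 0, "", if(len(trim($value$)) == 0, "| search devicemdn=".$devicemdn$, "OR devicemdn=".$devicemdn$))</eval>
    </condition>
  </change>
</input>

The same pair of evals would go in the devicemdn input's <change> block, with $value$ and $phonemdn$ swapped accordingly.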
My use case is the following: I have login information about which ASN a user logged in from today in the field ASN, and data from the Authentication data model, which gives me a "list" of ASNs via values(ASN) AS Multi_ASN. I was trying to use an eval to get a YES or NO answer as to whether the user has logged in from these ASNs before. There was a lot of pain getting the command right, but I ended up using this eval for this type of data:

ASN = A1234
Multi_ASN = A1234 A2345 A3456

| eval Logged_before_from_ASN=if(IN(ASN, (split(Multi_ASN," "))) , "YES", "NO")

So the split divides the values in Multi_ASN, and that is compared by the if(IN(, but unfortunately there is no syntax highlighting for IN. Any recommendations? This eval is working, but I wonder if there is a better way to do it.
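A sketch of one alternative, assuming Multi_ASN is a genuine multivalue field (as values(ASN) produces) and that ASN values contain no regex metacharacters, since mvfind treats its second argument as a regular expression:

| eval Logged_before_from_ASN=if(isnotnull(mvfind(Multi_ASN, "^".ASN."$")), "YES", "NO")

mvfind returns the index of the first matching value, or null if there is no match, which avoids the split and keeps the comparison anchored to whole values rather than substrings.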
Is there a way to specify a timezone in a data model? I have an eval field called date that relies on Splunk's _time field, but I want to ensure that it matches a specific timezone rather than relying on the extracted _time of the log, which is in UTC. I want the timezone to match Brisbane, Australia (+10).
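A rough sketch under two stated assumptions: that the formatted time would otherwise come out in UTC, and that Brisbane's fixed UTC+10 offset (no daylight saving) makes a constant shift acceptable. Since _time is a timezone-independent epoch value, the eval field can simply add the offset before formatting:

| eval date=strftime(_time + 36000, "%Y-%m-%d")

If the users running searches have mixed timezone preferences this hard offset will be wrong for some of them, so treating it as a display convention (or setting the user/role timezone to Australia/Brisbane) may be safer than baking the offset into the data model.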