All Topics

Hello Team,

rex field=_raw "string_list=%25(?<new_field1>\w+)%25"

The above extracts a word between %25 and %25. If the extracted word is only one or two letters long, I want to ignore/skip it. Thanks for the help in advance. @field-extraction
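One possible approach (a sketch, not verified against your data) is to require at least three word characters in the capture group, e.g. rex field=_raw "string_list=%25(?<new_field1>\w{3,})%25". The same idea in Python, with a hypothetical sample event:

```python
import re

# Hypothetical raw event with two %25-delimited words.
raw = "foo string_list=%25ab%25 bar string_list=%25hello%25"

# \w{3,} only matches runs of three or more word characters,
# so one- and two-letter words are skipped entirely.
pattern = re.compile(r"string_list=%25(?P<new_field1>\w{3,})%25")

matches = [m.group("new_field1") for m in pattern.finditer(raw)]
print(matches)  # ['hello'] -- the two-letter 'ab' is not extracted
```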
Hi, I have a summary index which gets populated once a month. I have a query that runs based on the current month, looks back at the last 6 months, and produces a report. Is it possible to rewrite the query to show a trend, i.e. for each month, look back at the previous 6 months of data and report on it?

Here is the query, which looks back at the last 6 months from the current month. I would like to do the same for all months (look back from each month) and produce a trend:

index=summary source=sre_slo_BE_qlatency_permodule_monthly
| where _time>=relative_time(now(),"-6mon@mon")
| eval Month=Month + "-" + Year
| chart values(p90Latency) as P90Latency by Month, Module useother=f limit=10000
| eval MonthYear=Month, Year=substr(Month,5,4), Month=substr(Month,0,3)
| fields - Year
| table MonthYear *
| transpose 20 header_field=MonthYear, column_name=Module
| foreach *20* [ eval Max=case(Max>=if(isnull('<<FIELD>>'),0,'<<FIELD>>'),Max,true(),if(isnull('<<FIELD>>'),0,'<<FIELD>>'))]
| where Max>30000
| foreach *20* [eval <<FIELD>>=ROUND(('<<FIELD>>')/1000,2)]
| fields - Max
| rename Module as MainModule
| eval RequestType="Business Event"
| lookup SLOHighToleranceLookup RequestType OUTPUTNEW Module
| eval Module=if(isnull(Module), "null", Module)
| where MainModule != Module
| fields - Module, RequestType
| rename MainModule as Module
| eval ViolationCount=0, LastViolatedMonth="", LastViolatedResponse=0, TotalViolationCount=0
| foreach *-2020 or *-2021 [ | eval LastViolatedMonth = if('<<FIELD>>'>30,"<<FIELD>>", LastViolatedMonth) , LastViolatedMonthNumber = substr(LastViolatedMonth, 0, 2) , ViolationCount=if(('<<FIELD>>'>30), ViolationCount+1, ViolationCount) , LastViolatedResponse=if('<<FIELD>>'>30,'<<FIELD>>', LastViolatedResponse) , Deviation=case(LastViolatedResponse>30,round(((LastViolatedResponse-30)/30)*100,1)) , Priority = case( (Deviation >= 100 AND ViolationCount >=1), "P1" , ((Deviation >= 75 AND Deviation < 100) AND ViolationCount >=3), "P1" , ((Deviation >= 75 AND Deviation < 100) AND (ViolationCount >= 0 AND ViolationCount < 3)), "P2" , ((Deviation >= 50 AND Deviation < 75) AND ViolationCount >= 3), "P2" , ((Deviation >= 50 AND Deviation < 75) AND (ViolationCount >= 0 AND ViolationCount < 3)), "P3" , ((Deviation >= 25 AND Deviation < 50) AND ViolationCount >= 3), "P3" , ((Deviation >= 25 AND Deviation < 50) AND (ViolationCount >= 1 AND ViolationCount < 3)), "P4" , ((Deviation > 0 AND Deviation < 25) AND ViolationCount >= 0), "P4" )]
| eval LastViolatedMonthNumber = substr(LastViolatedMonth, 0, 2) , LastViolatedMonthYear = substr(LastViolatedMonth, 4, 4)
| eval LastViolatedMonth = case(LastViolatedMonthNumber==01, "Jan", LastViolatedMonthNumber==02, "Feb", LastViolatedMonthNumber==3, "Mar", LastViolatedMonthNumber==4, "Apr", LastViolatedMonthNumber==5, "May", LastViolatedMonthNumber==6, "Jun", LastViolatedMonthNumber==7, "Jul", LastViolatedMonthNumber==8, "Aug", LastViolatedMonthNumber==9, "Sep", LastViolatedMonthNumber==10, "Oct", LastViolatedMonthNumber==11, "Nov", LastViolatedMonthNumber==12, "Dec")
| eval LastViolatedMonth=LastViolatedMonth + "-" + LastViolatedMonthYear
| fields Module, LastViolatedMonth, LastViolatedResponse, ViolationCount, Deviation, Priority, LastViolatedMonthNumber, LastViolatedMonthYear
| sort - LastViolatedResponse
| rename LastViolatedMonth as "Last Violation Month", LastViolatedResponse as "Last Violation p90ResponseTime (s)", Deviation as "Deviation (%)", ViolationCount as "Missed Count"
| eval CurrentMonth = strftime(now(), "%m"), CurrentYear= strftime(now(), "%Y"), ViolationMonthDifference=if(CurrentYear>LastViolatedMonthYear, (12-LastViolatedMonthNumber)+CurrentMonth, CurrentMonth-LastViolatedMonthNumber)
| where ViolationMonthDifference<=3
| eval Priority = if(Priority=="P1" AND LastViolatedMonthNumber != CurrentMonth-1 , "P2", Priority)
| fields - LastViolatedMonthNumber, LastViolatedMonthYear, CurrentMonth, CurrentYear, ViolationMonthDifference

Thanks
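I'm not certain this maps directly onto the summary-index query above, but the "trailing 6 months per month" idea the question asks about can be sketched in Python (all names and data values here are made up for illustration):

```python
# Hypothetical monthly p90 latency values keyed by (year, month):
# Jan..Jul 2021 get values 101..107.
monthly = {(2021, m): 100 + m for m in range(1, 8)}

def trailing_window(year, month, n=6):
    """Return values for the n months ending at (year, month),
    newest first -- the per-month analogue of
    relative_time(now(), "-6mon@mon")."""
    out = []
    y, m = year, month
    for _ in range(n):
        if (y, m) in monthly:
            out.append(monthly[(y, m)])
        m -= 1
        if m == 0:           # roll back across the year boundary
            y, m = y - 1, 12
    return out

# Window ending July 2021 covers Feb..Jul 2021.
print(trailing_window(2021, 7))  # [107, 106, 105, 104, 103, 102]
```

In SPL terms this is the difference between one fixed `where _time>=relative_time(...)` filter and a per-row rolling window (e.g. `streamstats window=6` over month-ordered rows), though that substitution would need testing against your data.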
Hello, I used | rest /services/server/info | table host kvStoreStatus to check the KV store status after upgrading Splunk from 8.0.4.1 to 8.1.3. The results are coherent with the messages for the SHC; that's taken care of. But for my indexers (in a cluster), the value is either "ready" or "failed". I'm not sure I understand what either really means, and I have no clue how to fix it, as my understanding is that the usual commands for that are for search heads. Does this mean I can apply the same procedure as https://docs.splunk.com/Documentation/Splunk/8.2.1/Admin/ResyncKVstore ? Thanks. Regards, Ema
Hi, as mentioned in the subject, I want to perform operations on a list of values with a single value. To be clearer, here's my search: index="my_index" | stats limit=15 values(my_transaction) as transactions by group_name | eventstats median(transactions) as median_transaction by group_name | eval dv=(abs(transactions-median_transactions)) However, "dv" is empty. I am assuming this is because "transactions" is a multivalue field (a list) while "median_transaction" is a single value for each group. If my assumption is correct, what's the best way to perform the operation between each value in "transactions" and "median_transaction" for each group?
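One common approach (a sketch, not verified against your data) is to expand the multivalue field so the subtraction happens per value, e.g. adding | mvexpand transactions before the eval. The underlying arithmetic, illustrated in Python with made-up group data:

```python
from statistics import median

# Hypothetical per-group transaction values, standing in for the
# multivalue "transactions" field produced by stats values().
groups = {
    "group_a": [10.0, 12.0, 30.0],
    "group_b": [5.0, 7.0],
}

# Compute each group's median once, then the absolute deviation of
# every individual value from it -- the per-value operation that
# mvexpand would enable in SPL.
deviations = {
    name: [abs(v - median(values)) for v in values]
    for name, values in groups.items()
}
print(deviations)  # {'group_a': [2.0, 0.0, 18.0], 'group_b': [1.0, 1.0]}
```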
Hi all, I currently have a pivot table with a column named 'alias'. I was wondering, is there a way to sort that column such that the value "error" always shows up on top, with the remaining values below it? The default alphabetical orders don't work here: in both alphabetical and reverse alphabetical order, "error" does not show up on top and instead appears somewhere in the middle. Any help would be greatly appreciated!
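One possible trick in SPL (untested here) is an artificial sort key, e.g. | eval sortkey=if(alias=="error",0,1) | sort sortkey alias. The equivalent two-part sort key in Python:

```python
rows = ["info", "warn", "error", "debug"]

# Sort key is a tuple: False sorts before True, so "error" rows come
# first; all other rows fall back to plain alphabetical order.
ordered = sorted(rows, key=lambda alias: (alias != "error", alias))
print(ordered)  # ['error', 'debug', 'info', 'warn']
```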
Hello my loves, I have one quick question. Let's say I have these two strings:
AUJ.UEIEJ.829839.239383
033.4788383.27383.8HJJJ
What would be the correct regex to extract ONLY the string of characters after the first dot and before the second dot? That means from AUJ.UEIEJ.829839.239383 I want UEIEJ, and from 033.4788383.27383.8HJJJ I want 4788383. Thank you my loves for the help! Kindly, C
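A sketch of one regex that does this, assuming the target is always between the first and second dots:

```python
import re

samples = ["AUJ.UEIEJ.829839.239383", "033.4788383.27383.8HJJJ"]

# ^[^.]+  -> everything up to (not including) the first dot
# \.      -> the first dot itself
# ([^.]+) -> capture everything up to the second dot
pattern = re.compile(r"^[^.]+\.([^.]+)\.")

extracted = [pattern.match(s).group(1) for s in samples]
print(extracted)  # ['UEIEJ', '4788383']
```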
Hi, I need to know if it is possible to create a bar chart with fill patterns to differentiate the bars, in addition to colors. I already have the color portion but need to fill the bars with patterns. Thanks in advance!! Example: the value with the lowest count gets fewer dots and values with higher counts get more dots. Any other pattern will work as well.
Hi All, Good day... I have a situation here: the logs of a particular sourcetype in an index are disappearing. For example, please find the results for a query:
2021-07-20        0
2021-07-21        10
2021-07-22        232
2021-07-23        3571
If I run the same search again some time later, around 24 hours, I get the results below:
2021-07-20       0
2021-07-21       0
2021-07-22       2
2021-07-23       1524
The logs for the older days are disappearing. Note that the index max size is set to unlimited and there are no issues with the other sourcetypes under the same source. Could you please check and let me know what the issue is here?
I have a dashboard with multiple inputs. These inputs act like filters on top of a base search. I want:
1. If both phonemdn and devicemdn are provided, an OR between them on top of the base search: base search | search phonemdn=<value> OR devicemdn=<value>
2. If only phonemdn is provided: base search | search phonemdn=<value>
3. If only devicemdn is provided: base search | search devicemdn=<value>

Here is my dashboard XML:

<form>
  <label>Dashboard</label>
  <fieldset submitButton="true" autoRun="true">
    <input type="text" token="phonemdn" searchWhenChanged="false">
      <label>PHONE MDN</label>
      <default></default>
      <change>
        <condition>
          <eval token="phonemdn_exp">if(len(trim($value$)) == 0,"","| search phonemdn=".$value$)</eval>
        </condition>
      </change>
    </input>
    <input type="text" token="devicemdn">
      <label>DEVICE MDN</label>
      <default></default>
      <change>
        <condition>
          <eval token="devicemdn_exp">if(len(trim($value$)) == 0, "" , if(len(trim($phonemdn$)) == 0, "| search devicemdn=".$value$, "OR devicemdn=".$value$))</eval>
        </condition>
      </change>
    </input>
    <input type="dropdown" token="logtype" searchWhenChanged="true">
      <label>LOG TYPE</label>
      <choice value="*">ALL</choice>
      <choice value="server">Watch</choice>
      <choice value="application">Application</choice>
      <change>
        <condition value="server">
          <set token="filter_search_base">| search index=new | spath app | search app=newapp </set>
          <set token="logtype_lab">logtype=server</set>
          <set token="logtype_exp">| search source=Band | eval source="Band"</set>
        </condition>
        <condition value="application">
          <set token="filter_search_base">| search index=main | spath app | search app!=simulator</set>
          <set token="logtype_lab">logtype=Application</set>
          <set token="logtype_exp">| search source=Application</set>
        </condition>
        <condition value="*">
          <set token="filter_search_base">|multisearch [search index=new | spath app | search app=newapp] [search index=main | spath app | search app!=simulator]</set>
          <set token="logtype_lab">All Source</set>
          <set token="logtype_exp"></set>
        </condition>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>SEARCHING: $logtype_lab$ $phonemdn_exp$ $devicemdn_exp$</title>
      <search>
        <query>$filter_search_base$ $phonemdn_exp$ $devicemdn_exp$</query>
        <earliest>$timefield.earliest$</earliest>
        <latest>$timefield.latest$</latest>
      </search>
    </panel>
  </row>
</form>

So my first query always works, but afterwards it seems the input values for phonemdn and devicemdn are getting cached, and subsequent queries don't work as expected. If I input both phonemdn and devicemdn, the query is: base search | search phonemdn=<value> OR devicemdn=<value>. Then if I delete the value from phonemdn and keep only devicemdn:
actual query: base search OR devicemdn=<value>
expected query: base search | search devicemdn=<value>
It seems the phonemdn value from the first query is being cached somehow. Please help me resolve this issue. Let me know if you need more information. Thanks!!
My use case is the following: I have login information about which ASN a user logged in from today in the field ASN, and data from the Authentication datamodel, which gives me a "list" of ASNs via values(ASN) AS Multi_ASN. I was trying to use an eval to get a YES or NO answer for whether the user has logged in from these ASNs before. It was a lot of pain getting the command right, but I ended up using this eval for this type of data:
ASN = A1234
Multi_ASN = A1234 A2345 A3456

| eval Logged_before_from_ASN=if(IN(ASN, (split(Multi_ASN," "))) , "YES", "NO")

So the split divides the values in Multi_ASN, and that is compared by the if(IN(...)), but unfortunately there is no syntax highlighting for "IN". Any recommendations? This eval is working, but I wonder if there is a better way to do this.
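For what it's worth, here is the same split-then-membership test in Python, just to illustrate the logic (values copied from the example above); in SPL, mvfind on the multivalue field may be an alternative worth exploring:

```python
asn = "A1234"
multi_asn = "A1234 A2345 A3456"

# Split the space-delimited multivalue string into a list and test
# membership, mirroring if(IN(ASN, split(Multi_ASN, " ")), "YES", "NO").
logged_before = "YES" if asn in multi_asn.split(" ") else "NO"
print(logged_before)  # YES
```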
Is there a way to specify a timezone in a datamodel? I have an eval field called date that relies on Splunk's _time field, but I want to ensure it matches a specific timezone rather than relying on the extracted _time of the log, which is in UTC. I want the timezone to match Brisbane, Australia (+10).
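As a sketch of the offset arithmetic involved (not datamodel syntax), here is a UTC-to-Brisbane conversion in Python; the epoch value is just an example, and Brisbane is a fixed +10 with no daylight saving:

```python
from datetime import datetime, timezone, timedelta

# _time in Splunk is epoch seconds (UTC); this value is a stand-in.
epoch = 1626991803

brisbane = timezone(timedelta(hours=10))  # fixed +10, no DST

utc_dt = datetime.fromtimestamp(epoch, tz=timezone.utc)
local_dt = utc_dt.astimezone(brisbane)
print(local_dt.strftime("%Y-%m-%d %H:%M:%S %z"))  # 2021-07-23 08:10:03 +1000
```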
Hello, I just installed a new instance of Splunk Enterprise 8.2.1 with the Cisco ISE add-on module 4.1.0, nothing else. Per the documentation, I should see a Setup action for the ISE add-on, but I don't. Any ideas on what I missed? Really, I haven't configured anything else, and I made sure I am logged into Splunk as an administrator. Thanks, Jerry
I have set up the Graph API input for AuditSignIn.Logs, and logs are inconsistent and randomly missing in Splunk. I am getting this error in the logs:

2021-07-22 15:21:56,991 level=ERROR pid=8208 tid=MainThread logger=splunk_ta_o365.modinputs.graph_api pos=utils.py:wrapper:72 | datainput=b'SignInLogs' start_time=1626991803 | message="Data input was interrupted by an unhandled exception."
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunksdc/utils.py", line 70, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/graph_api.py", line 235, in run
    return consumer.run()
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/graph_api.py", line 114, in run
    self._ingest(message, source)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/graph_api.py", line 124, in _ingest
    self._event_writer.write_event(message.data, source=source)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunksdc/event_writer.py", line 161, in write_event
    self._write(data)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunksdc/event_writer.py", line 145, in _write
    self._dev.write(data)
BrokenPipeError: [Errno 32] Broken pipe

Any help?
I have data with different event types, say A to M, and I want to find the time difference taken for each event. Example: index=apple source=datapipe

eventType=newyork        A
eventType=california     B    B-A
eventType=boston         C    C-B
eventType=houston        D    D-C
eventType=dallas         E    E-D
eventType=austin         F    F-D
eventType=Irvine         G    G-E
eventType=Washington     H    H-F
eventType=Atlanta        I    I-H
eventType=San Antonio    J    J-I
eventType=Brazil         K    K-I
eventType=Mumbai         L    L-I
eventType=Delhi          M    M-I

Currently I'm using | streamstats range(_time) as diff window=2, however that gives the differences in sequential order. I want the time differences in the format above. The eventTypes are unique and I'm using append in my search for each eventType. @sundareshr @ITWhisperer @Nisha18789 @MuS @jasongb @yuanliu @thetech @guilmxm Thank you
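Since the desired pairs (B-A, C-B, ..., F-D, ..., M-I) are fixed rather than strictly sequential, one way to frame it is an explicit predecessor map per event. A Python sketch with made-up timestamps:

```python
# Hypothetical event times (epoch seconds) keyed by eventType letter.
times = {"A": 100, "B": 130, "C": 145, "D": 170, "E": 200,
         "F": 220, "G": 260, "H": 300, "I": 330,
         "J": 350, "K": 360, "L": 380, "M": 400}

# Explicit predecessor for each event, matching the desired pairs
# (B-A, C-B, D-C, E-D, F-D, G-E, H-F, I-H, J-I, K-I, L-I, M-I)
# instead of whatever order the events happen to arrive in.
predecessor = {"B": "A", "C": "B", "D": "C", "E": "D", "F": "D",
               "G": "E", "H": "F", "I": "H", "J": "I",
               "K": "I", "L": "I", "M": "I"}

diffs = {ev: times[ev] - times[pred] for ev, pred in predecessor.items()}
print(diffs["F"])  # 220 - 170 = 50  (F-D, not F-E)
```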
Hello, I have 2 CSV lookups that update several times a day. One (A) is from the CMDB with the entire list of assets (hostname, ip, user, os, etc.). The other (B) is a list of installed clients for some product, also containing the hostname. I would like a search/dashboard that lists hosts in A that are not found in B, along with some additional fields. I have not found a way to do this with the 2 lookups; any ideas? Thanks!
Lookup CSV A: Host1, Host2, Host3
Lookup CSV B: Host1, Host3
Search output: Host2
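In SPL, one commonly suggested pattern (worth verifying for your case) is | inputlookup A.csv | search NOT [| inputlookup B.csv | fields host]. The set logic itself, sketched in Python with stand-in CSV data:

```python
import csv
import io

# Stand-ins for the two lookup files; only the host column matters here.
csv_a = "host,ip\nHost1,10.0.0.1\nHost2,10.0.0.2\nHost3,10.0.0.3\n"
csv_b = "host\nHost1\nHost3\n"

hosts_a = {row["host"]: row for row in csv.DictReader(io.StringIO(csv_a))}
hosts_b = {row["host"] for row in csv.DictReader(io.StringIO(csv_b))}

# Hosts present in A but absent from B, keeping A's extra fields.
missing = [hosts_a[h] for h in hosts_a if h not in hosts_b]
print([r["host"] for r in missing])  # ['Host2']
```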
I'm trying to work with a bunch of system logs that are either ERROR or INFO logs. Each has a unique ID # that is specific to a certain package. I'm trying to figure out a way to count how many of these unique IDs are present only in INFO logs, meaning there were no issues associated with that ID. There are multiple logs associated with each ID, so if an ID is in 5 INFO logs but 1 ERROR log, it shouldn't be counted; but if it's only in INFO logs, even just one, it should be counted. I'm a novice with Splunk and I need to figure this out for my internship ASAP, so all help is appreciated. Thanks!
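In SPL this often reduces to something like | stats count(eval(level=="ERROR")) as errors by id | where errors=0 (unverified against your data). The counting logic, sketched in Python with made-up logs:

```python
logs = [
    ("id1", "INFO"), ("id1", "INFO"), ("id1", "ERROR"),
    ("id2", "INFO"),
    ("id3", "INFO"), ("id3", "INFO"),
]

# Collect the set of levels seen per ID; an ID qualifies only if that
# set is exactly {"INFO"} -- a single ERROR log disqualifies it.
levels = {}
for log_id, level in logs:
    levels.setdefault(log_id, set()).add(level)

info_only = [i for i, seen in levels.items() if seen == {"INFO"}]
print(len(info_only))  # 2 -> id2 and id3; id1 has an ERROR log
```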
Hello, I hope you can help me figure out what is going on. I have a distributed environment: a search head and two indexers. I've recently upgraded to Splunk 8.1.3 from 7.3, but one of my two indexers is not working properly; the splunkd service is taking all the CPU and memory resources, and the server is now painfully slow. On the search head I'm seeing messages like this:
- The percentage of non high priority searches delayed (50%) over the last 24 hours is very high and exceeded the red thresholds (20%) on this Splunk instance. Total Searches that were part of this percentage=8065. Total delayed Searches=4070
- TCPOutAutoLB-0 Errors
Hi, I developed the app "Allkun MON for ISO8583" and published it on Splunkbase some weeks ago. Now I'm doing development validations for an upgrade. I ran a scan with the Python Upgrade Readiness app and the report ends with a failure (not compatible with Python 3), but the application doesn't use any Python scripts and has no bin folder. How can I get this validation to pass? Regards
Hi Team, I am trying to first search and then aggregate results from the following Splunk logs.

Raw format: "buildDimensionsAttributes: $attribute: $constraint: $result"
Sample message: message: buildDimensionsAttributes: 6393: AttributeConstraints(-1.0,99.92,2,DoubleFormat): 99.98

In AttributeConstraints:
- the 1st index corresponds to the min value, here -1.0
- the 2nd index corresponds to the max value, here 99.92
- the 3rd index corresponds to the decimal precision, here 2

I want to first filter the $result values that are out of range (here 99.98 is not between [-1.0, 99.92]), then aggregate (group by) the various $attribute values, and show something like the following on a dashboard where the usual time filters can be applied:

Attribute# | RecordCountOfOutOfRange | TotalRecords

Thanks AG
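A sketch of the parse-then-aggregate step in Python, using the sample message format above (the two extra sample messages are invented for illustration):

```python
import re

messages = [
    "buildDimensionsAttributes: 6393: AttributeConstraints(-1.0,99.92,2,DoubleFormat): 99.98",
    "buildDimensionsAttributes: 6393: AttributeConstraints(-1.0,99.92,2,DoubleFormat): 50.00",
    "buildDimensionsAttributes: 7001: AttributeConstraints(0.0,10.0,1,DoubleFormat): 12.3",
]

# Named groups pull out the attribute, the min/max constraints, and
# the result from the "$attribute: $constraint: $result" layout.
pattern = re.compile(
    r"buildDimensionsAttributes: (?P<attribute>\d+): "
    r"AttributeConstraints\((?P<minval>[-\d.]+),(?P<maxval>[-\d.]+),"
    r"(?P<decimal>\d+),\w+\): (?P<result>[-\d.]+)"
)

# Per-attribute counts: (total records, records outside [min, max]).
counts = {}
for msg in messages:
    m = pattern.search(msg)
    attr = m.group("attribute")
    in_range = (float(m.group("minval"))
                <= float(m.group("result"))
                <= float(m.group("maxval")))
    total, bad = counts.get(attr, (0, 0))
    counts[attr] = (total + 1, bad + (0 if in_range else 1))

print(counts)  # {'6393': (2, 1), '7001': (1, 1)}
```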
Hi, I have seen this dashboard, which runs in Splunk but is available publicly: https://covid-19.splunkforgood.com/coronavirus__covid_19_ I got the app and its source code from GitHub: https://github.com/splunk/corona_virus I would like to know how the dashboard is made available publicly and how the searches run when the dashboard is used, because it doesn't require authentication to view. Also, what happens when a lot of people run this dashboard at the same time? #splunk4good