All Posts


Can you provide me any suggestions to resolve this issue?
So I'm new to Splunk on GCP and still learning. One thing I'm trying to wrap my head around is this: GCP Pub/Sub provides native support for HTTP push, which is pretty straightforward. Splunk's GCP Dataflow template, meanwhile, is a data pipeline that just re-formats the logs and pushes them through the Splunk HEC, which is an HTTP endpoint. From an architectural point of view, introducing the Dataflow template into the GDI adds an extra layer when the log export could seemingly be done by Pub/Sub HTTP push alone, so what is the specific value added by the Dataflow template?
Hello, I have some issues performing multi-line field extraction for XML; my in-line extraction is not getting any results. Sample events and my in-line extraction are provided below. Any help would be appreciated.

Sample events:

<Event>
<ID>0123011</ID>
<Time>2023-10-28T05:22:37.97011</Time>
<Application_Name>Test</Application_Name>
<Host_Name>VS0SMADBEFT</Host_Name>
</Event>
<Event>
<ID>01232113</ID>
<Time>2023-10-28T05:22:37.99011</Time>
<Application_Name>Test</Application_Name>
<Host_Name>VS0SMADBEFT</Host_Name>
</Event>

In-line extraction I used:

<ID>(?<ID>[^<]+)<\/ID>([\r\n]*)<Time>(?<Time>[^<]+)<\/Time>([\r\n]*)<Application_Name>(?<Application_Name>[^<]+)<\/Application_Name>([\r\n]*)<Host_Name>(?<Host_Name>[^<]+)<\/Host_Name>
Oh! I get this token at a <single> block. The code for this <single> block looks like this:

<single>
  <title>Max time</title>
  <search>
    <query>index=idx_prd_analysis sourcetype="type:prd_analysis:delay_time" corp="delay" | where (plane_type==1) OR (plane_type==2) | eval total_time = round(takeOff_time - boarding_time, 3) | stats MAX(total_time)</query>
    <earliest>$_time.earliest$</earliest>
    <latest>$_time.latest$</latest>
  </search>
  <option name="colorMode">block</option>
  <option name="drilldown">all</option>
  <option name="height">154</option>
  <option name="numberPrecision">0.000</option>
  <option name="rangeValues">[500]</option>
  <option name="refresh.display">progressbar</option>
  <option name="unitPosition">before</option>
  <drilldown>
    <set token="max_value">$click.value$</set>
  </drilldown>
</single>

Is there any solution to send max_value in a plain, unformatted form? (Not 123,456 -> want 123456)
Hello all! This will be a doozy, so get ready. We are running a search with tstats-generated results; through various troubleshooting we simplified it to the following:

| tstats count by host
| rename host as hostname
| outputlookup some_kvstore

The config of the kvstore is as follows:

# collections.conf
[some_kvstore]
field.hostname = string

# transforms.conf
[some_kvstore]
collection = some_kvstore
external_type = kvstore
fields_list = hostname

When you run the first two lines of the SPL, you get quite a few results, as it queries the internal db for hosts and retrieves a count of their logs. After you add the outputlookup command, it removes all your results and will not add them to the kvstore.

As my coworker found, there is a way to write the results to the kvstore after all; however, the SPL for that is quite cursed, as it involves joining the original search back in, but the new results will be written to the kvstore:

| tstats count by host
| rename host as hostname
| table hostname
| join hostname [| tstats count by host | rename host as hostname]
| outputlookup some_kvstore

As far as I am aware, 9.1.2, 9.0.6, and the latest versions of Cloud have this issue even as fresh installs of Splunk; however, it does work on 8.2.1 and 7.3.3 systems (don't ask). The Splunk user owns everything in the Splunk dir so there is no problem with writing to any files, the kvstore permissions are global, and any user can read or write to it. So after several hours of troubleshooting, we are stumped and not sure where we should look next. Changing to a csv is unfortunately not an option.

Things we have tried so far, that I can remember:
- Completely fresh installs of Splunk
- Cleaning the kvstore via `splunk clean kvstore -local`
- Outputting to a csv (works)
- Using makeresults to create the fields manually and add to the kvstore (works)
- Using the noop command to disable all search optimization
- Writing to the kvstore via API (works)
- Reading data from the kvstore via inputlookup (works)
- Modifying an entry in the kvstore via the lookup editor app (works)
- Testing with all search modes (fast, smart, verbose)
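For reference, the makeresults control case from the list above looked something like this sketch (test_host is a placeholder value; append=true avoids clobbering the collection):

| makeresults
| eval hostname="test_host"
| table hostname
| outputlookup some_kvstore append=true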
I am trying to remove Windows EventCodes 4688 and 4627. Nothing I have tried has worked. Here are the things that I have tried in inputs.conf:

blacklist = EventCode="4688" Message="(?:New Process Name:).+(?:SplunkUniversalForwarder\bin\splunk.exe)|.+(?:SplunkUniversalForwarder\bin\splunkd.exe)|.+(?:SplunkUniversalForwarder\bin\btool.exe)|.+(?:Splunk\bin\splunk.exe)|.+(?:Splunk\bin\splunkd.exe)|.+(?:Splunk\bin\btool.exe)|.+(?:Agent\MonitoringHost.exe)"
blacklist1 = EventCode="4688"
blacklist2 = EventCode="4627"
blacklist = EventCode=4627,4688
blacklist = EventCode=4627|4688
blacklist = EventCode=%^(4627|4688)$%
blacklist = EventCode=%^4627$%
blacklist = EventCode=%^4688$%
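For comparison, here is a minimal sketch of the two formats I believe inputs.conf documents for Windows Event Log inputs (the [WinEventLog://Security] stanza is an assumption, and the forwarder needs a restart after the change). Simple format, a bare comma-separated list of event IDs with no key:

[WinEventLog://Security]
disabled = 0
blacklist = 4627,4688

Or the advanced key=regex format, with the regex in double quotes and one rule per numbered setting:

[WinEventLog://Security]
disabled = 0
blacklist1 = EventCode="^(4627|4688)$"

Several of the attempts above mix the two formats in one value or leave the regex unquoted, which may be where it breaks.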
That did it. Much appreciated. 
Hi @Govind.samy,

I wanted to share these AppD Docs pages, as they might share some insight:

https://docs.appdynamics.com/appd/onprem/latest/en/end-user-monitoring/eum-accounts-licenses-and-app-keys
https://docs.appdynamics.com/appd/onprem/latest/en/appdynamics-licensing/license-entitlements-and-restrictions
Technically you could do the following to fix the symptoms:

| where time >= tonumber(replace($max_value$, ",", "")) - 0.001

but you are better off finding the source of the token, as @PickleRick says, and making sure it contains something suitable to perform calculations with, if that's how you intend to use it.
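If the comma formatting really is baked into $click.value$, another option at the source, sketched against the <single> block posted above (the max_value_raw token name is hypothetical), is to derive a clean numeric token in the drilldown with an <eval token> element:

<drilldown>
  <set token="max_value">$click.value$</set>
  <!-- hypothetical second token holding the unformatted number -->
  <eval token="max_value_raw">tonumber(replace("$click.value$", ",", ""))</eval>
</drilldown>

Downstream searches can then use $max_value_raw$ directly in calculations.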
You first mention colouring the column, then the row. If you want to colour the column, you can do it provided your importer is a single value field; from your search you are doing stats values(..) as importer. The principle of colouring a column (not a row) based on its relation to another field is to make the column you want to colour a multivalue field by appending the indicator, e.g.

| eval importer=mvappend(importer, importer_in_csv)

and to then limit the number of values shown for that field to 1 with some CSS, e.g.

<html depends="$hidden$">
  <style>
    #coloured_cell table tbody td div.multivalue-subcell[data-mv-index="1"] {
      display: none;
    }
  </style>
</html>

and then to use a format statement in the table definition:

<format type="color" field="importer">
  <colorPalette type="expression">case(mvindex(value, 1) == "0", "#FF0000", mvindex(value, 1) == "1", "#00FF00")</colorPalette>
</format>

However, it's not clear from your search what your data actually looks like: the join subsearch is not terminated, so it's not clear where it ends, and you don't appear to have any common fields to join with.
Hello Giuseppe, I noticed it's been over 8 years since you posted your question, but I came across this post while searching for how to make a text box empty by default, same as you were looking to do. I was working on a dashboard today and thought about which character never appears in event data and is not used by SPL for any reason. The answer was the tilde: ~

This worked for me, like a charm, in a dashboard text box:

<initialValue>~</initialValue>
<default>~</default>

Best regards,
Dennis
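In context, that looks something like the sketch below (the token name free_text is made up for illustration; the search that consumes the token then has to treat ~ as the "no input" sentinel):

<input type="text" token="free_text">
  <label>Filter</label>
  <initialValue>~</initialValue>
  <default>~</default>
</input>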
I agree that you would expect it to return the entire MV field, not just the first value. I suspect this may be a bug that has existed forever, but one which has a workaround. If you have a support entitlement with Splunk, you could raise it as a bug and see what they say.

This is a simple working example from your data that exhibits the problem (remove the comment marks on the two eval lines to apply the workaround):

| makeresults format=csv data="_time,name,status,nameStatus
2023-12-06 16:06:20,A:B:C,UP:DOWN:UP,A;UP:B;DOWN:C;UP
2023-12-06 16:03:20,A:B:C,UP:UP:UP,A;UP:B;UP:C;UP
2023-12-06 16:00:20,A:B:C,DOWN:UP:UP,A;DOWN:B;UP:C;UP"
| foreach * [ eval <<FIELD>>=split(<<FIELD>>, ":") ]
```| eval nameStatus=mvjoin(nameStatus,"##")```
| stats latest(nameStatus) as nameStatus
```| eval nameStatus=split(nameStatus, "##")```
@Bo3432 The where clause takes an eval statement, and in eval statements you need to wrap 'odd' field names in single quotes. In your case properties.userAgent contains a full-stop, so you need to use

| where isnotnull('properties.userAgent') AND 'properties.userAgent'!=""
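To see the quoting rule in action, here is a self-contained sketch (the JSON payload is made up): spath generates the dotted field name from _raw, and single quotes are what let where reference it, whereas double quotes would just produce a string literal.

| makeresults
| eval _raw="{\"properties\": {\"userAgent\": \"Mozilla/5.0\"}}"
| spath
| where isnotnull('properties.userAgent') AND 'properties.userAgent'!=""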
That's actually a good (and working) idea! Thank you very much! I don't know why latest didn't work either, because technically it should just check the time and return the whole thing, right? But yes, it works now, thank you very much!
First, let me clarify that this problem is solvable as stated. But you may want to reconsider how "macro 1" and "macro 2" are structured to make this easier. You may also want to structure a different search to make this function more efficient.

Back to the stated problem. The idea is to "tag" output from each macro, then count which host is in which output.

`macro 1`
| eval source = "macro1"
| append [search `macro 2` | eval source = "macro2"]
| stats values(source) as source by host
| where mvcount(source) < 2 AND source == "macro1"

Note I insert the "search" command in the subsearch because I do not know how "macro 2" is constructed. It may not need that, or the "search" command may ruin it. The where command also uses a feature/side effect of SPL's equality comparator against multivalue fields.
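To illustrate the multivalue equality trick without the macros, here is a self-contained simulation (host and source values are made up). Only hostA survives, because it appears in macro1's output but not macro2's:

| makeresults format=csv data="host,source
hostA,macro1
hostB,macro1
hostB,macro2
hostC,macro2"
| stats values(source) as source by host
| where mvcount(source) < 2 AND source == "macro1"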
That's interesting, and it seems as though it may be a bug, but it may be that it's always worked that way. The solution is to mvjoin the data so it's single-valued, then split it afterwards, e.g.

...
| eval nameStatus=mvjoin(nameStatus,"##")
| stats latest(nameStatus) as nameStatus
| eval nameStatus=split(nameStatus, "##")
I have some search before, and after I extract fields (name, status) from json and mvzip them together, I get this table:

_time                 name    status      nameStatus
2023-12-06 16:06:20   A B C   UP DOWN UP  A,UP B,DOWN C,UP
2023-12-06 16:03:20   A B C   UP UP UP    A,UP B,UP C,UP
2023-12-06 16:00:20   A B C   DOWN UP UP  A,DOWN B,UP C,UP

I want to get only the record with the latest time, so I pipe in the command ...|stats latest(nameStatus). However, the result comes out only as A,UP.

How can I fix this? Thank you!
You need to supply the owner in your call.  Just add "&owner=nobody" if it is a global lookup.
If there are events from 5 different IP addresses with the same attack name, then a count by dest_ip and attack_name will produce 5 events with a count of 1. Very likely not what you're looking for. Instead, count the number of IP addresses for each attack name and keep the results where the count is at least 5.

index=ids
| streamstats distinct_count(dest_ip) as count time_window=1h by attack_name
| where count >= 5
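If a sliding one-hour window is not strictly required, fixed hourly buckets give a similar result and are cheaper; a sketch assuming the same index=ids and field names as above:

index=ids
| bin _time span=1h
| stats dc(dest_ip) as ip_count by _time attack_name
| where ip_count >= 5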
Hello, unfortunately this is giving me blank entries if the duration is under a day.

We figured it out, and this logic seems to be working:

| rex field=ELAPSED "((?<dd>\d*)-?)?((?<hh>\d+):?)?((?<mm>\d*):?)?(?<ss>\d+)$"
| rex field=ELAPSED "((?<hhh>\d+):?)?((?<mmm>\d*):?)?(?<sss>\d+)$"
| rex field=ELAPSED "((?<mmmm>\d*):?)?(?<ssss>\d+)$"
| eval dd=if(isnotnull(hh),dd,0)
| eval hhh=if('mm'='mmm',hhh,0)
| eval mm=if('ss'='ssss',mmmm,0)
| eval elapsed_secs = coalesce((if(isnotnull(dd),dd,0)*86400)+(if(isnotnull(hh),hh,0)*3600)+(if(isnotnull(mm),mm,0)*60)+if(isnotnull(ss),ss,0),0)
| table ELAPSED elapsed_secs
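To sanity-check the extraction, the same pipeline can be fed made-up sample durations with makeresults (the ELAPSED values below are hypothetical, covering the day-hour-minute-second, hour-minute-second, and minute-second forms):

| makeresults format=csv data="ELAPSED
2-11:22:33
02:03:04
03:04"
| rex field=ELAPSED "((?<dd>\d*)-?)?((?<hh>\d+):?)?((?<mm>\d*):?)?(?<ss>\d+)$"
| rex field=ELAPSED "((?<hhh>\d+):?)?((?<mmm>\d*):?)?(?<sss>\d+)$"
| rex field=ELAPSED "((?<mmmm>\d*):?)?(?<ssss>\d+)$"
| eval dd=if(isnotnull(hh),dd,0)
| eval hhh=if('mm'='mmm',hhh,0)
| eval mm=if('ss'='ssss',mmmm,0)
| eval elapsed_secs = coalesce((if(isnotnull(dd),dd,0)*86400)+(if(isnotnull(hh),hh,0)*3600)+(if(isnotnull(mm),mm,0)*60)+if(isnotnull(ss),ss,0),0)
| table ELAPSED elapsed_secs

Expected elapsed_secs values for those three samples are 213753, 7384, and 184.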