All Posts

That did it. Much appreciated. 
Hi @Govind.samy, I wanted to share these AppD Docs pages, as they might offer some insight:
https://docs.appdynamics.com/appd/onprem/latest/en/end-user-monitoring/eum-accounts-licenses-and-app-keys
https://docs.appdynamics.com/appd/onprem/latest/en/appdynamics-licensing/license-entitlements-and-restrictions
Technically you could fix the symptoms with

| where time >= tonumber(replace($max_value$, ",", "")) - 0.001

but you are better off finding the source of the token, as @PickleRick says, and making sure it contains something suitable to perform calculations with, if that's how you intend to use it.
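For anyone who wants to sanity-check the comma-stripping idea outside of SPL, here is a small Python sketch of the same fix. The token value and the 0.001 offset mirror the answer above; the function names are made up for illustration.

```python
# Emulate: | where time >= tonumber(replace($max_value$, ",", "")) - 0.001
# A token rendered with thousands separators ("1,701,892,130.5") is not a
# valid number, so strip the commas before converting and comparing.

def parse_token(token: str) -> float:
    """Remove thousands separators and convert the token to a float."""
    return float(token.replace(",", ""))

def keep_event(event_time: float, max_value_token: str) -> bool:
    """Keep events at or just below the (comma-formatted) max value."""
    return event_time >= parse_token(max_value_token) - 0.001

print(keep_event(1701892130.5, "1,701,892,130.5"))  # True
```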
You first mention colouring the column, then the row. If you want to colour the column, you can do it if your importer is a single value field. From your search you are doing stats values(..) as importer, but the principle of colouring a column (not a row) based on its relation to another field is to make the column you want to colour a multivalue field by appending the indicator, e.g.

| eval importer=mvappend(importer, importer_in_csv)

and then to limit the number of values shown for that field to 1 with some CSS, e.g.

<html depends="$hidden$">
  <style>
    #coloured_cell table tbody td div.multivalue-subcell[data-mv-index="1"] {
      display: none;
    }
  </style>
</html>

and then to use a format statement in the table definition:

<format type="color" field="importer">
  <colorPalette type="expression">case(mvindex(value, 1) == "0", "#FF0000", mvindex(value, 1) == "1", "#00FF00")</colorPalette>
</format>

However, it's not clear from your search what your data actually looks like: the join subsearch is not terminated, so it's not clear where it ends, and you don't appear to have any common fields to join on.
Hello Giuseppe, I noticed it's been over 8 years since you posted your question, but I came across this post while searching for how to make a text box empty by default, same as you were looking to do. I was working on a dashboard today and thought about which character never appears in event data and is not used by SPL for any reason. The answer was the tilde: ~

This worked for me, like a charm, in a dashboard text box:

<initialValue>~</initialValue>
<default>~</default>

Best regards, Dennis
I agree that you would expect it to return the entire MV field, not just the first value. I suspect this may be a bug that has existed forever, but one which has a workaround. If you have a support entitlement with Splunk, you could raise it as a bug and see what they say. This is a simple working example from your data that exhibits the problem:

| makeresults format=csv data="_time,name,status,nameStatus
2023-12-06 16:06:20,A:B:C,UP:DOWN:UP,A;UP:B;DOWN:C;UP
2023-12-06 16:03:20,A:B:C,UP:UP:UP,A;UP:B;UP:C;UP
2023-12-06 16:00:20,A:B:C,DOWN:UP:UP,A;DOWN:B;UP:C;UP"
| foreach * [ eval <<FIELD>>=split(<<FIELD>>, ":") ]
```| eval nameStatus=mvjoin(nameStatus,"##")```
| stats latest(nameStatus) as nameStatus
```| eval nameStatus=split(nameStatus, "##")```
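The mvjoin/split workaround can be sanity-checked outside Splunk. This Python sketch (the field names mirror the data above, but it is only an illustration, not how Splunk implements latest()) shows why collapsing the multivalue field into one delimited string preserves it through a latest-style pick:

```python
# Emulate the workaround: collapse the multivalue field to one delimited
# string, take the row with the latest timestamp, then split it back.
rows = [
    ("2023-12-06 16:06:20", ["A;UP", "B;DOWN", "C;UP"]),
    ("2023-12-06 16:03:20", ["A;UP", "B;UP", "C;UP"]),
    ("2023-12-06 16:00:20", ["A;DOWN", "B;UP", "C;UP"]),
]

# | eval nameStatus=mvjoin(nameStatus,"##")
joined = [(t, "##".join(vals)) for t, vals in rows]

# | stats latest(nameStatus) -- pick the row with the greatest timestamp
latest_time, latest_joined = max(joined, key=lambda row: row[0])

# | eval nameStatus=split(nameStatus, "##")
name_status = latest_joined.split("##")
print(name_status)  # ['A;UP', 'B;DOWN', 'C;UP']
```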
@Bo3432 The where clause takes an eval expression, and in eval expressions you need to wrap 'odd' field names in single quotes. In your case properties.userAgent contains a full stop, so you need to use

| where isnotnull('properties.userAgent') AND 'properties.userAgent'!=""
That's actually a good (and working) idea! Thank you very much! I don't know why latest didn't work either, because technically it should just check the time and return the whole thing, right? But yes, it works now, thank you very much!
First, let me clarify that this problem is solvable as stated. But you may want to reconsider how "macro 1" and "macro 2" are structured to make this easier. You may also want to structure a different search to make this more efficient.

Back to the stated problem. The idea is to "tag" the output from each macro, then count which host is in which output.

`macro 1`
| eval source = "macro1"
| append [search `macro 2` | eval source = "macro2"]
| stats values(source) as source by host
| where mvcount(source) < 2 AND source == "macro1"

Note that I insert the "search" command in the subsearch because I do not know how "macro 2" is constructed. It may not need it, or the "search" command may ruin it. The where command also uses a feature/side effect of SPL's equality comparator against multivalue fields.
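The tag-and-count idea is easy to model outside SPL. This Python sketch (the host lists are hypothetical stand-ins for the two macros' output) shows how tagging and grouping isolates hosts that appear only in the first result set:

```python
# Tag each result set with its source, group by host, then keep hosts
# whose only source is "macro1" -- the set-difference the SPL computes.
macro1_hosts = ["web01", "web02", "db01"]   # hypothetical output of `macro 1`
macro2_hosts = ["web02", "db01", "app01"]   # hypothetical output of `macro 2`

sources: dict[str, set[str]] = {}
for host in macro1_hosts:
    sources.setdefault(host, set()).add("macro1")
for host in macro2_hosts:
    sources.setdefault(host, set()).add("macro2")

# | where mvcount(source) < 2 AND source == "macro1"
only_macro1 = sorted(h for h, s in sources.items() if s == {"macro1"})
print(only_macro1)  # ['web01']
```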
That's interesting, and it seems as though it may be a bug, but it may be that it has always worked that way. The solution is to mvjoin the data so it's single value, then split it afterwards, e.g.

...
| eval nameStatus=mvjoin(nameStatus,"##")
| stats latest(nameStatus) as nameStatus
| eval nameStatus=split(nameStatus, "##")
I have some search before this, and after I extract fields (name, status) from JSON and mvzip them together, I get this table:

_time                name    status       nameStatus
2023-12-06 16:06:20  A B C   UP DOWN UP   A,UP B,DOWN C,UP
2023-12-06 16:03:20  A B C   UP UP UP     A,UP B,UP C,UP
2023-12-06 16:00:20  A B C   DOWN UP UP   A,DOWN B,UP C,UP

I want to get only the latest of the records, so I pipe in the command ...|stats latest(nameStatus). However, the result comes out only as A,UP. How can I fix this? Thank you!
You need to supply the owner in your call.  Just add "&owner=nobody" if it is a global lookup.
If there are events from 5 different IP addresses with the same attack name, then a count by dest_ip and attack_name will produce 5 events with a count of 1 each. Very likely not what you're looking for. Instead, count the number of IP addresses for each attack name and keep the results where the count is at least 5.

index=ids
| streamstats distinct_count(dest_ip) as count time_window=1h by attack_name
| where count >= 5
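Outside Splunk, the same filter (count distinct IPs per attack name, keep names seen from at least 5 addresses) looks like the sketch below. The sample events are made up, and the 1-hour sliding window is omitted for brevity:

```python
# events: (attack_name, dest_ip) pairs -- hypothetical sample data.
events = [
    ("sql_injection", f"10.0.0.{i}") for i in range(1, 6)  # 5 distinct IPs
] + [("port_scan", "10.0.1.1"), ("port_scan", "10.0.1.1")]  # 1 distinct IP

# distinct_count(dest_ip) by attack_name
ips_per_attack: dict[str, set[str]] = {}
for attack, ip in events:
    ips_per_attack.setdefault(attack, set()).add(ip)

# | where count >= 5 -- keep attacks seen from at least 5 distinct addresses
flagged = sorted(a for a, ips in ips_per_attack.items() if len(ips) >= 5)
print(flagged)  # ['sql_injection']
```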
Hello, unfortunately this is giving me blank entries if the duration is under a day. We figured it out, and this logic seems to be working:

| rex field=ELAPSED "((?<dd>\d*)-?)?((?<hh>\d+):?)?((?<mm>\d*):?)?(?<ss>\d+)$"
| rex field=ELAPSED "((?<hhh>\d+):?)?((?<mmm>\d*):?)?(?<sss>\d+)$"
| rex field=ELAPSED "((?<mmmm>\d*):?)?(?<ssss>\d+)$"
| eval dd=if(isnotnull(hh),dd,0)
| eval hhh=if('mm'='mmm',hhh,0)
| eval mm=if('ss'='ssss',mmmm,0)
| eval elapsed_secs = coalesce((if(isnotnull(dd),dd,0)*86400)+(if(isnotnull(hh),hh,0)*3600)+(if(isnotnull(mm),mm,0)*60)+if(isnotnull(ss),ss,0),0)
| table ELAPSED elapsed_secs
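For reference, the same dd-hh:mm:ss-to-seconds conversion can be expressed with a single regex. This Python sketch is a reimplementation of the idea, not the original SPL; by anchoring the optional groups from the right, missing larger units simply default to zero:

```python
import re

# ELAPSED can look like "2-03:04:05", "03:04:05", "04:05", or "05".
# Nesting the optional groups from the right means a short value only
# matches the small units, so missing days/hours/minutes default to 0.
PATTERN = re.compile(
    r"^(?:(?:(?:(?P<dd>\d+)-)?(?P<hh>\d+):)?(?P<mm>\d+):)?(?P<ss>\d+)$"
)

def elapsed_seconds(elapsed: str) -> int:
    """Convert a [[dd-]hh:]mm:]ss duration string to total seconds."""
    m = PATTERN.match(elapsed)
    if m is None:
        return 0
    dd, hh, mm, ss = (int(m.group(g) or 0) for g in ("dd", "hh", "mm", "ss"))
    return dd * 86400 + hh * 3600 + mm * 60 + ss

print(elapsed_seconds("2-03:04:05"))  # 183845
```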
Yes, it can be used with a table and all other visualizations. When you say "it is giving no results", does that mean the where command is not filtering as expected, or that you are getting nothing at all from the query? If the former, then it's possible the userAgent field is all spaces, so the filter should be modified to handle that. For the latter, try renaming the fields to eliminate the dots.

index=azure sourcetype="azure:monitor:aad" action=*
| rename properties.* as *
| where isnotnull(userAgent) AND userAgent!=""
| table _time user deviceDetail.displayName userAgent action
| sort -_time
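The all-spaces case is easy to miss. Here is a tiny Python sketch of a whitespace-safe version of that filter; the rows are made up, but the field name matches the renamed search above:

```python
# Keep rows whose userAgent is present and not blank or whitespace-only,
# mirroring: | where isnotnull(userAgent) AND userAgent != ""
rows = [
    {"user": "alice", "userAgent": "Mozilla/5.0"},
    {"user": "bob",   "userAgent": "   "},  # all spaces: slips past != ""
    {"user": "carol", "userAgent": None},   # null: isnotnull() catches this
]

kept = [r for r in rows if r["userAgent"] and r["userAgent"].strip()]
print([r["user"] for r in kept])  # ['alice']
```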
That worked. Thanks.
I'm going crazy trying to troubleshoot this error with Eventgen. I'm only using one mvfile replacement type and it is not working. The SA-Eventgen logs tell me this:

time="2023-12-06T19:42:32Z" level=warning msg="No srcField provided for mvfile replacement: "

In my $SPLUNK_HOME/etc/apps/<app>/default/eventgen.conf file, I have:

...
token.2.token = "(\$customer_name\$)"
token.2.replacementType = mvfile
token.2.replacement = $SPLUNK_HOME/etc/apps/eventgen_yogaStudio/samples/customer_info.txt:1
...

My customer_info.txt file contains:

JoeSmith,43,Wisconsin,Pisces
JaneDoe,25,Kentucky,Gemini
...

I'm getting JSON-formatted events, but customer_name is just blank:

{
  membership: gold
  customer_name:
  item: 30-day-pass
  quantity: 4
  ts: 1701892130
}

I've tried the following sample file names: customer_info.txt, customer_info.sample, customer_info.csv. Nothing seems to work. I'm going crazy!
Can this be used with a table? This is my command, but it is giving no results.

index=azure sourcetype="azure:monitor:aad" action=*
| where isnotnull(properties.userAgent) AND properties.userAgent!=""
| table _time user properties.deviceDetail.displayName properties.userAgent action
| sort -_time
Hi all, I published a new version of my app, https://splunkbase.splunk.com/app/7087, version 1.2.0 (invisible for now because of the issue below). When I tried to install it on my cloud instance through Splunkbase, I got the error below:

X509 certificate (CN=splunkbase.splunk.com,O=Splunk Inc.,L=San Francisco,ST=California,C=US) common name (splunkbase.splunk.com) did not match any allowed names (apps.splunk.com,cdn.apps.splunk.com)

That's weird because I did not change anything about certificates or the packaging process. I just fixed one more bug in the app (about missing data) and bumped the app version. I tried other apps on Splunkbase and the old version of my app, and they all work fine. Does anyone have an idea what happened to my 1.2.0 app? Your help will be appreciated very much!
Does the account running Splunk have permission to delete the files?  Are there any messages in splunkd.log about the files?