I don't believe you can use colour names such as Green and Yellow; you have to use hex codes or RGB values. See https://docs.splunk.com/Documentation/SplunkCloud/latest/Viz/TableFormatsXML#Color_palette_types_and_options. In your case it's interesting that you get yellow, as I would expect black if it doesn't understand colour names. Have you tried <colorPalette type="expression">if(value == "up","#00FF00", "#FFFF00")</colorPalette>?
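If you'd rather enumerate exact string matches than write an expression, Simple XML also supports a map-type palette. A minimal sketch, assuming the column is named "status" and its values are literally "up" and "down" (adjust the field name and values to your data):

```
<format type="color" field="status">
  <colorPalette type="map">{"up":#00FF00,"down":#FFFF00}</colorPalette>
</format>
```

Note the map syntax uses quoted keys and unquoted hex codes, per the table format documentation linked above.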
You can generally do this by concatenating the two data items into a single field for the split-by clause of the timechart, e.g. ... | eval split=application.":".Condition | timechart span=1d count by split
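Spelled out as a full pipeline, the concatenate-then-split approach looks like this (a sketch; the index name is hypothetical, and it assumes your events carry application and Condition fields):

```
index=your_index
| eval split=application.":".Condition
| timechart span=1d count by split
```

Each distinct application:Condition pair then becomes its own series in the timechart.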
Please post an example of your data containing topicId.
With syslog-ng we hit all kinds of limitations: the inability to support TCP, the inability to write to disk fast enough (and therefore losing vast amounts of UDP data), and struggling to get the various F5 LBs to distribute the data evenly to the syslog servers behind them. All of this led us to Kafka as a potential solution. Does anybody use Kafka effectively instead of syslog-ng? By the way, we did look at Splunk Connect for Syslog (SC4S) without much luck.
This is OK, but it is not helping me, as I do not want the dropdown for status as mentioned below. I even applied the changes below in my code, but the colors for Stopped and Running did not change. Below is my code after the change; no color change happened. Please help me here ASAP.

"viz_itT7cfIB": {
    "type": "splunk.singlevalue",
    "dataSources": {
        "primary": "ds_B6p8HEE0"
    },
    "title": "status",
    "options": {
        "majorColor": "> majorValue | rangeValue(majorColorEditorConfig)",
        "backgroundColor": "transparent",
        "trendColor": "transparent"
    },
    "context": {
        "majorColorEditorConfig": [
            {
                "match": "Running",
                "value": "#118832"
            },
            {
                "match": "Stopped",
                "value": "#d41f1f"
            }
        ]
    }
},
"ds_B6p8HEE0": {
    "type": "ds.chain",
    "options": {
        "enableSmartSources": true,
        "extend": "ds_JRxFx0K2",
        "query": "| eval status = if(OPEN_MODE=\"READ WRITE\",\"Running\",\"Stopped\") | stats latest(status)"
    },
    "name": "oracle status"
},
@gcusello, Thank you for your response. No issue "yet", just trying to think this through. If Workload Management's Admission Rules are set to disallow index=*, will this break any/all DMs that use the autoextractSearch (index=* OR index=_*)?
Hi @Fo, please try something like this:

(index="first-app" sourcetype="first-app_application_log" "eventType=IMPORTANT_CREATE_EVENT") OR (index="second-app" sourcetype="second-app_application_log" "eventType=IMPORTANT_CANCEL_EVENT")
| stats count(eval(index="first-app")) AS first_app count(eval(index="second-app")) AS second_app
| eval diff=first_app-second_app
| table diff

One additional hint: don't use the minus character in field names, because Splunk reads it as the minus sign; use an underscore (_) instead. Ciao. Giuseppe
I have two very simple searches and I need to be able to get the difference between their counts. This is insanely hard for something that is so simple.

search index="first-app" sourcetype="first-app_application_log" AND "eventType=IMPORTANT_CREATE_EVENT" | stats count
^ this result is 150

search index="second-app" sourcetype="second-app_application_log" AND "eventType=IMPORTANT_CANCEL_EVENT" | stats count
^ this result is 5

I'm trying to figure out how to simply do 150 - 5 to get 145. I've tried `set diff` and `eval` a bunch of different ways with no luck. I'm going nuts. Any help would be greatly appreciated!
How to pull data from Splunk using search and build component in SUIT - Splunk UI Tools (@splunk/visualization/Area )
tstats/TERM/PREFIX and accelerated DM searches are affected. This isn't limited to punycode domains; any value with consecutive hyphens may be affected. Consider usernames, user-agents, URL paths and queries, file names, and file paths – the range of affected fields is extensive. The implications extend to premium apps like Enterprise Security, which rely heavily on accelerated DMs. Virtually every source and sourcetype could be impacted, including commonly used ones like firewall, endpoint, windows, proxy, etc. Here are a couple of examples to illustrate the issue:

Working URL: hp--community.force.com
Path: /tmp/folder--xyz/test-----123.txt, c:\Windows\Temp\test---abc\abc--123.dat
Username: admin--haha
User-Agent: Mozilla/5.0--findme
tstats/TERM/PREFIX and accelerated DM searches are affected. This isn't limited to punycode domains; any domain with consecutive hyphens may be affected. Consider usernames, user-agents, URL paths and queries, file names, and file paths – the range of affected fields is extensive. The implications extend to premium apps like Enterprise Security, which rely heavily on accelerated DMs. Virtually every source and sourcetype could be impacted, including commonly used ones like firewall, endpoint, windows, proxy, etc. Here are a couple of examples to illustrate the issue:

Working URL: https://hp--community.force.com
Path: /tmp/folder--xyz/test-----123.txt, c:\Windows\Temp\test---abc\abc--123.dat
Username: admin--haha
User-Agent: Mozilla/5.0--findme
Hi @hank72, sorry, but what's the issue? This way you're sure that all the indexes are used to populate Data Models, even if they aren't in the default search path. Anyway, if you want to limit the indexes used in one or all Data Models (I don't understand why!), you can modify the macro used in the constraints of each Data Model. Ciao. Giuseppe
tstats/TERM/PREFIX searches and accelerated DM searches are affected. I haven't conducted a thorough check yet, but it seems that searches on accelerated DMs may overlook fields with double dashes. This isn't limited to punycode domains; any field value with consecutive hyphens may be affected. Consider usernames, user-agents, URL paths and queries, file names, and file paths – the range of affected fields is extensive. The implications extend to premium apps like Enterprise Security, which rely heavily on accelerated DMs. Virtually every source and sourcetype could be impacted, including commonly used ones like firewall, endpoint, windows, proxy, etc. Here are a couple of examples to illustrate the issue:

Working URL: https://hp--community.force.com
Path: /tmp/back--door/test-----backdoor.txt, c:\Windows\Temp\back--door\test---backdoor.exe
Username: admin--backdoor
User-Agent: Mozilla/5.0--backdoor
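A quick way to check whether your indexers are hit is a plain TERM() search against one of the sample values. A sketch; the index name here is hypothetical, and it assumes events containing the double-dash value have already been indexed:

```
| tstats count where index=proxy TERM(hp--community.force.com)
```

If the count comes back as zero while a normal event search for the same string finds results, the term was split differently at index time than TERM() expects.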
You should list and categorise them and then do a summarizing stats. With a normal event search that could be done without appending, but since you have the data in lookups, you'd need to add a "lookup identifier field" to the lookup contents in order to avoid the append command. Assuming you don't have it, it's something like this:

| inputlookup abc.csv
| eval source="abc.csv"
| table server source
| append
    [ | inputlookup def.csv
      | eval source="def.csv"
      | table server source ]

This will give you a set of your servers along with an identifier of which lookup each server came from. Now you can do

| stats values(source) as sources by server

and you'll get a multivalued field sources containing either of the source lookups or both of them, so you can use it to filter the data the way you want. An alternative approach is to not add string labels but numerical ids (like 1 and 2) and then do sum() instead of values() - then you'd have a field with value 1, 2 or 3 depending on which lookup(s) the server was originally in. One caveat about the initial building of the list - it uses the append command, which has its limitations on run time (which will not be an issue here) and on the number of returned results (which might). If you had the field I mentioned at the beginning identifying the lookup, instead of using the append command you could just use another inputlookup command with the append=t option.
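The numeric-id variant described above can be sketched like this (a sketch using the same two lookups; the src and flag field names are made up for illustration):

```
| inputlookup abc.csv
| eval src=1
| table server src
| append
    [ | inputlookup def.csv
      | eval src=2
      | table server src ]
| stats sum(src) AS src BY server
| eval flag=case(src==1, "only_abc", src==2, "only_def", src==3, "both")
```

Because the ids are 1 and 2, the sum is unambiguous: 1 means abc.csv only, 2 means def.csv only, 3 means both.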
+1 on that. The impact is limited to where you use the custom segmenter (you set it for a specific props stanza).
Yes. Verify your sources and their config in Splunk. Without more information we can't tell you anything more than that.
Hi Experts, I need to compare server lists from two different CSV lookups and create a flag based on the comparison results. I have two lookups:

abc.csv - contains the list of servers being monitored in the dashboard
def.csv - contains the list of servers from another source

I need to identify servers that are:
- present in both abc.csv and def.csv
- not found in the dashboard (i.e. abc.csv)
- not found in def.csv

How do I compare them and create a flag? Any guidance or example queries would be greatly appreciated. Thank you
Hi @Splunk-Star, after using the table or stats commands, Splunk shows only the outputs of those commands. This does not mean the other fields are not extracted. If you need to access other fields, add them to the table command.
Please let me know the correct data extraction.

index=* "Unknown message for StatusConsumer" topicId marshall
| rex field=_raw "\"topicId\":\"(?<topicId>\d+)\""
| table topicId

The data is not getting parsed after adding the table command to the Splunk query.
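If the events are valid JSON, spath is an alternative to rex that avoids the quote-escaping issues entirely. A sketch, assuming _raw is a JSON object with a top-level topicId key (adjust the path if the key is nested):

```
index=* "Unknown message for StatusConsumer"
| spath input=_raw path=topicId output=topicId
| table topicId
```

Also note the rex pattern above uses \d+, so it only matches purely numeric topicId values; spath extracts the value regardless of its format.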
Hi community, When using datamodels, is it possible to remove/exclude the portion of the autoextractSearch: | search (index=* OR index=_*)