All Posts

Hello, I am running a search that returns IP addresses that are sent to a WAF (web application firewall). The WAF requires all IP addresses to be written in CIDR notation. I am only returning single IPs, so I have to add a /32 to each address that I submit. I am using the stats command, looking at different parameters and then counting by IP to produce the list I am submitting. It seems like it should be straightforward using concatenation, but I haven't been able to get to a solution. eval cidr_address=remoteIP + "/32" and variations of this approach (casting to string, etc.) haven't worked. I appreciate any help anyone can provide.
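In SPL, eval concatenates strings with the . operator rather than +, so a minimal sketch (assuming the field is named remoteIP and always holds a single IPv4 address) might look like:

... | stats count by remoteIP
| eval cidr_address = remoteIP . "/32"
| table remoteIP cidr_address

If the field might arrive as a non-string type, tostring(remoteIP) . "/32" is an equivalent defensive variant.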
As @PickleRick says, please ignore the generative AI response. collect is the documented command and it is what you should use when you want to save data to an index from an SPL command: https://docs.splunk.com/Documentation/SplunkCloud/9.0.2305/SearchReference/Collect

summaryindex is the command that is still used internally by Splunk when you enable summary indexing from within a scheduled saved search and is effectively a synonym for collect. Don't use it - it is not a documented command.

A "summary index" is perhaps a poor name for the concept - collect allows you to push anything you like to an index and there is nothing special about that index. Yes, the original intention is that it should contain "summarised data", but in practice a summary index is just an index.

Note that the behaviour of _time when you collect data to an index is not well documented. It can change depending on what your data looks like and whether your search is run from a scheduled report or not.
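For reference, a minimal collect sketch (the index name my_summary and the source label are placeholders; the target index must already exist and your role must be able to write to it):

index=web sourcetype=access_combined
| stats count by status
| collect index=my_summary source="daily_status_summary"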
If you only have a common field of _time, are you planning on visual matching, and how are you looking to match things inside that minute? You can also use stats to 'join' data together (see the sketch below), but perhaps you can expand on your use case with an example so we can give more useful help.
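As an illustration of the stats-style 'join', a minimal sketch - the index names indexA/indexB and the fields fieldA/fieldB are placeholders, and the 1-minute bin is an assumption about your timestamp granularity:

(index=indexA) OR (index=indexB)
| bin _time span=1m
| stats values(fieldA) as fieldA values(fieldB) as fieldB by _time
| where isnotnull(fieldA) AND isnotnull(fieldB)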
You don't need to use the IN construct when using subsearches, as the default returned from a subsearch is

field=A OR field=B OR field=C...

so in practice you can just do

index=index2 [ search index=index1 service IN (22, 53, 80, 8080) | table src_ip | rename src_ip as dev_ip ]
| table dev_ip, OS_Type

However, how many src_ips are you likely to get back from this subsearch? If you get a large number, this may not perform well at all. In that case you will have to approach the problem in a different way, e.g.

index=index2 OR (index=index1 service IN (22, 53, 80, 8080))
``` Creates a common dev_ip field which is treated as the common field between the two indexes ```
| eval dev_ip=if(index=index2, dev_ip, src_ip)
``` Now we need the data to be seen in both indexes, so count the indexes and collect the OS_Type values and split by that common dev_ip field ```
| stats dc(index) as indexes values(OS_Type) as OS_Type by dev_ip
``` And this just ensures we have seen the data from both places ```
| where indexes=2
| fields - indexes

A third way to attack this type of problem is using a lookup, where you maintain a list of the src_ips you want to match in a lookup table. Which one you end up with will depend on your data and its volume, as they will have different performance characteristics. Hope this helps
If this is dashboard logic, where do your parameters come from? Presumably they are tokens from somewhere. If so, you can just construct the token appropriately so you have | search $my_token$ where my_token is constructed elsewhere. Is it from a multiselect dropdown? If so, just use the settings in the multiselect input to set the token prefix/delimiter values.
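For illustration, a minimal Simple XML sketch of a multiselect input that builds such a token - the field name status, the token name my_token, and the choice values are placeholders:

<input type="multiselect" token="my_token">
  <label>Status</label>
  <prefix>status IN (</prefix>
  <suffix>)</suffix>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter>, </delimiter>
  <choice value="up">up</choice>
  <choice value="down">down</choice>
</input>

The panel search can then simply use | search $my_token$.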
I don't believe you can use colour names, such as Green and Yellow; you have to use hex codes or RGB, see here https://docs.splunk.com/Documentation/SplunkCloud/latest/Viz/TableFormatsXML#Color_palette_types_and_options In your case it's interesting that you get yellow, as I would expect black if it does not understand colour names. Have you tried <colorPalette type="expression">if(value == "up","#00FF00", "#FFFF00")</colorPalette>?
You can generally do this by concatenating the two data items into a single field for the split-by clause of the timechart, i.e.

... | eval split=application.":".Condition
| timechart span=1d count by split
Please post an example of your data containing topicid  
With syslog-ng we hit all kinds of limitations: the inability to support TCP, the inability to write fast enough to disk (and therefore losing vast amounts of UDP data), and struggling to get the various F5 LBs to distribute the data evenly to the syslog servers behind the F5s... All of this led to Kafka as a potential solution. Does anybody use Kafka effectively instead of syslog-ng? By the way, we did look at Splunk Connect for Syslog (SC4S) without much luck.
This is OK, but it is not helping me, as I do not want the dropdown for status as mentioned below. I applied the changes below in my code, but the colors for Stopped and Running did not change. Below is my code after the change; no color change happened. Please help me here asap.

"viz_itT7cfIB": {
    "type": "splunk.singlevalue",
    "dataSources": {
        "primary": "ds_B6p8HEE0"
    },
    "title": "status",
    "options": {
        "majorColor": "> majorValue | rangeValue(majorColorEditorConfig)",
        "backgroundColor": "transparent",
        "trendColor": "transparent"
    },
    "context": {
        "majorColorEditorConfig": [
            {
                "match": "Running",
                "value": "#118832"
            },
            {
                "match": "Stopped",
                "value": "#d41f1f"
            }
        ]
    }
},
"ds_B6p8HEE0": {
    "type": "ds.chain",
    "options": {
        "enableSmartSources": true,
        "extend": "ds_JRxFx0K2",
        "query": "| eval status = if(OPEN_MODE=\"READ WRITE\",\"Running\",\"Stopped\") | stats latest(status)"
    },
    "name": "oracle status"
},
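If it helps, a sketch of what I believe the working pattern looks like in Dashboard Studio - please verify against the Dashboard Studio docs; the assumption here is that string matches use matchValue rather than rangeValue (rangeValue is for numeric thresholds):

"options": {
    "majorColor": "> majorValue | matchValue(majorColorEditorConfig)"
},
"context": {
    "majorColorEditorConfig": [
        { "match": "Running", "value": "#118832" },
        { "match": "Stopped", "value": "#d41f1f" }
    ]
}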
@gcusello, Thank you for your response. No issue "yet", just trying to think this through. If Workload Management's Admission Rules are set to not allow index=*, will this break any/all DMs that use the autoextractSearch (index=* OR index=_*)?
Hi @Fo, please try something like this:

(index="first-app" sourcetype="first-app_application_log" "eventType=IMPORTANT_CREATE_EVENT") OR (index="second-app" sourcetype="second-app_application_log" "eventType=IMPORTANT_CANCEL_EVENT")
| stats count(eval(index="first-app")) AS first_app count(eval(index="second-app")) AS second_app
| eval diff=first_app-second_app
| table diff

One additional hint: don't use the minus character in field names, because Splunk interprets it as the minus sign; use underscore (_) instead.

Ciao. Giuseppe
I have two very simple searches and I need to be able to get the difference. This is insanely hard for something that is so simple.

search index="first-app" sourcetype="first-app_application_log" AND "eventType=IMPORTANT_CREATE_EVENT" | stats count
^ this result is 150

search index="second-app" sourcetype="second-app_application_log" AND "eventType=IMPORTANT_CANCEL_EVENT" | stats count
^ this result is 5

I'm trying to figure out how to simply do the 150 - 5 to get 145. I've tried `set diff` and `eval` a bunch of different ways with no luck. I'm going nuts. Any help would be greatly appreciated!
How to pull data from Splunk using search and build a component in SUIT - Splunk UI Tools (@splunk/visualization/Area)
Affected are tstats/TERM/PREFIX and accelerated DM searches. This isn't limited to punycode domains; any value with continuous hyphens may be affected. Consider usernames, user-agents, URL paths and queries, file names, and file paths – the range of affected fields is extensive. The implications extend to premium apps like Enterprise Security, heavily reliant on accelerated DMs. Virtually every source and sourcetype could be impacted, including commonly used ones like firewall, endpoint, windows, proxy, etc. Here are a couple of examples to illustrate the issue: Working URL: hp--community.force.com Path: /tmp/folder--xyz/test-----123.txt, c:\Windows\Temp\test---abc\abc--123.dat Username: admin--haha User-Agent: Mozilla/5.0--findme
Affected are tstats/TERM/PREFIX and accelerated DM searches. This isn't limited to punycode domains; any domain with continuous hyphens may be affected. Consider usernames, user-agents, URL paths and queries, file names, and file paths – the range of affected fields is extensive. The implications extend to premium apps like Enterprise Security, heavily reliant on accelerated DMs. Virtually every source and sourcetype could be impacted, including commonly used ones like firewall, endpoint, windows, proxy, etc. Here are a couple of examples to illustrate the issue: Working URL: https://hp--community.force.com Path: /tmp/folder--xyz/test-----123.txt, c:\Windows\Temp\test---abc\abc--123.dat Username: admin--haha User-Agent: Mozilla/5.0--findme
Hi @hank72, sorry, but what's the issue? In this way, you're sure that all the indexes are used to populate Data Models even if they aren't in the default search path. Anyway, if you want to limit the indexes used in one or all Data Models (I don't understand why!), you can modify the macro used in the constraints of each Data Model. Ciao. Giuseppe
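As an illustration - assuming the CIM Add-on is installed and the Data Model in question is Authentication (the macro name follows the usual cim_<datamodel>_indexes pattern, and the index names here are placeholders) - the constraint macro can be overridden in macros.conf or via Settings > Advanced search > Search macros:

[cim_Authentication_indexes]
definition = (index=wineventlog OR index=linux_secure)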
Affected are tstats/TERM/PREFIX searches and accelerated DM searches. I haven't conducted a thorough check yet, but it seems that searches on accelerated DM may overlook fields with double dashes. This isn't limited to punycode domains; any field value with continuous hyphens may be affected. Consider usernames, user-agents, URL paths and queries, file names, and file paths – the range of affected fields is extensive. The implications extend to premium apps like Enterprise Security, heavily reliant on accelerated DMs. Virtually every source and sourcetype could be impacted, including commonly used ones like firewall, endpoint, windows, proxy, etc. Here are a couple of examples to illustrate the issue: Working URL: https://hp--community.force.com Path: /tmp/back--door/test-----backdoor.txt, c:\Windows\Temp\back--door\test---backdoor.exe Username: admin--backdoor User-Agent: Mozilla/5.0--backdoor
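To make this easier to reproduce, a minimal test sketch - the index name proxy is a placeholder; the idea is simply to compare a plain quoted search against TERM-based and tstats searches for the same double-hyphen value:

index=proxy "hp--community.force.com"

index=proxy TERM(hp--community.force.com)

| tstats count where index=proxy TERM(hp--community.force.com) by sourcetype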
You should list and categorise them and then do a summarizing stats. With a normal event search that could be done without appending, but since you have the data in lookups you'd need to add a "lookup identifier field" to the lookup contents in order to avoid the append command. Assuming you don't have it, it's something like this:

| inputlookup abc.csv
| eval source="abc.csv"
| table server source
| append
    [ | inputlookup def.csv
      | eval source="def.csv"
      | table server source ]

This will give you a set of your servers along with an identifier showing which lookup each server came from. Now you can do

| stats values(source) as sources by server

and you'll get a multivalued field sources containing either of the source lookups or both of them, so you can use it to filter the data the way you want.

An alternative approach is to not add string labels but numerical ids (like 1 and 2) and then do sum() instead of values() - then you'd have a field with value 1, 2 or 3 depending on which lookup(s) the server was originally in.

One caveat to the initial building of the list - it uses the append command, which has its limitations on run time (which will not be an issue here) and the number of returned results (which might). If you had the field I mentioned at the beginning identifying the lookup, instead of using the append command you could just use another inputlookup command with an append=t option.
+1 on that. The impact is limited to where you use the custom segmenter (you set it for a specific props.conf stanza).