All Posts

Check your effective config with btool to see if you've successfully overridden the settings. But you may also be hitting a different issue.
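For example, a sketch (substitute the conf file and stanza you're actually overriding; props and yoursourcetype here are just placeholders):
$SPLUNK_HOME/bin/splunk btool props list yoursourcetype --debug
The --debug flag prints which file each effective setting comes from, so you can confirm your override actually wins the precedence order.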
The only thing I could suggest is enabling debug logging and looking through the forwarder's logs, but that's a long shot and I have no concrete advice on what to look for. Kinda like "exploratory surgery".
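If you want to try that, something along these lines raises the verbosity of a single component at runtime (TailingProcessor is only an example; pick a component relevant to your problem):
$SPLUNK_HOME/bin/splunk set log-level TailingProcessor -level DEBUG
Remember to set it back to INFO afterwards, since DEBUG output grows splunkd.log quickly.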
Depending on what you mean by webhook - as @_JP already pointed out, the HTTP Event Collector (HEC) is one of the main and most commonly used input methods. You can also call Splunk over REST to perform various actions - see https://docs.splunk.com/Documentation/Splunk/latest/RESTREF/RESTlist (pushing events to HEC actually uses one of those REST endpoints too).
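As a minimal sketch of pointing a webhook-style sender at HEC (hostname and token are placeholders, 8088 is only the default HEC port, and -k merely skips certificate checks for testing):
curl -k https://your-splunk-host:8088/services/collector/event -H "Authorization: Splunk <your-hec-token>" -d '{"event": "hello from a webhook", "sourcetype": "webhook:test"}'
Any system that can POST JSON can feed HEC this way.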
Also, sometimes a change on the source (like a version upgrade) can result in a change in log format or level of detail even without changing the log level (e.g. a new software version logs additional fields in the events).
Sorting values in a multi-value field of IP addresses involves arranging the IPs in a specific order. Here's a simple way to do it:
1. Separate the IP addresses: if your multi-value field contains multiple IPs in a single string, split them into individual values.
2. Convert each IP to a numeric form: treat each octet as a number and combine them, so that sort order matches numeric order.
3. Sort the numeric representations in ascending or descending order, depending on your preference.
4. Convert the sorted values back to IP address format.
Example: suppose you have the IP addresses "192.168.1.2", "10.0.0.1", and "172.16.0.5".
Separate: ["192.168.1.2", "10.0.0.1", "172.16.0.5"]
Convert: [3232235778, 167772161, 2886729733]
Sort: [167772161, 2886729733, 3232235778]
Convert back: ["10.0.0.1", "172.16.0.5", "192.168.1.2"]
You can do this directly in SPL, or in a language like Python or JavaScript; see the SPL sketch below. Always consider the specific requirements of your project and the tools available in your environment.
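A minimal SPL sketch of a zero-padding variant of the same idea (assumes Splunk 8.0+ for mvmap; the makeresults input and field names are only for illustration):
| makeresults
| eval ip=split("192.168.1.2,10.0.0.1,172.16.0.5", ",")
| eval padded=mvmap(ip, printf("%03d.%03d.%03d.%03d", tonumber(mvindex(split(ip, "."), 0)), tonumber(mvindex(split(ip, "."), 1)), tonumber(mvindex(split(ip, "."), 2)), tonumber(mvindex(split(ip, "."), 3))))
| eval ip_sorted=mvsort(padded)
| eval ip_sorted=mvmap(ip_sorted, replace(ip_sorted, "(^|\.)0{1,2}(\d)", "\1\2"))
mvsort sorts lexically, which is why each octet is zero-padded to three digits first; the final mvmap strips the padding again to restore normal IP notation.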
Well, you're trying to force Splunk to do something it's not designed to do. You can have real-time search with Splunk, but real-time searches are not a very good solution and there are very limited use cases where their use is reasonable. They have their limitations and they hog resources (each real-time search blocks a CPU core _on every participating indexer_). You can use a report on a one-minute schedule (but bear in mind that depending on the load, a search can be delayed or skipped altogether!) or create a dashboard with a relatively frequent refresh period. But all those workarounds are fairly "heavy" for your environment, since you're spawning a new search often (and spawning a search is a relatively complicated process). Splunk Enterprise is not really a real-time monitoring solution (even though it has some functionality that does real-time stuff), so forcing it to do something like that might end in disappointment.
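If you go the scheduled-report route, a sketch of the relevant savedsearches.conf stanza (the stanza name, search, and time ranges are placeholders):
[my_minutely_check]
search = index=main sourcetype=my_sourcetype | stats count
enableSched = 1
cron_schedule = * * * * *
dispatch.earliest_time = -2m@m
dispatch.latest_time = -1m@m
Note the deliberately lagged time window, which gives events a chance to be indexed before the search runs over them.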
Anyway, with big lookups you should rather use the KV store instead of CSV files, since CSV lookups are scanned linearly and thus not very efficient as the file grows.
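A sketch of what defining a KV store lookup looks like (collection, stanza, and field names are placeholders):
# collections.conf
[my_big_collection]
# transforms.conf
[my_big_lookup]
external_type = kvstore
collection = my_big_collection
fields_list = _key, field1, field2
Once defined, you use it exactly like a CSV lookup, e.g. | inputlookup my_big_lookup or | lookup my_big_lookup field1 OUTPUT field2.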
I'm testing the code you posted and indeed something is very strange. It is like base0 is not configured correctly. Anything chained to it cannot return anything unless it is a semantic no-op like "| search *". I also see the method you experimented with in that code using base1. For now you can use that as a workaround. I will continue looking into what's wrong with the base0 search.
What's your actual config regarding those overrides? I have a feeling that you should rather put a decent syslog receiver in front of your HF.
I think I have the solution. The base search is in fact not really a search, but a bare reference to an index:
index=_internal
It seems that because no pipeline exists, the chain breaks - arguably there is no real search to begin with. By changing my base search (base2) as follows, which combines base0 and base0chain1a, a pipeline is created:
index=_internal | search useTypeahead=true
This then allows me to extend it further with an additional chained search successfully:
| stats count
This gives me the expected behaviour, shown in the bottom row of graphs.
Hi all, I have a problem similar to one already solved by @PickleRick in a previous question: I have a flow from a concentrator (a different one from the previous) that sends many logs to my HF via syslog. It's a mixture of many kinds of logs (Juniper, Infoblox, Fortinet, etc.). I have to override the sourcetype value, assigning the correct one, but the issue is that afterwards the related add-ons have to do a new override of the sourcetype value, and this second override doesn't work. I'd like a hint about the two solutions I thought of, or a different one: 1) if it's possible, how do I make a double sourcetype override? 2) do you think it's better to modify the add-ons, avoiding the first override? Thank you for your help. Ciao. Giuseppe
Hi @jhooper33, no, I asked you to share the search that caused the message "regex too long", not the lookup, to understand what the issue with the regex could be. I'd suggest exploring the use of summary indexes or a Data Model instead of a lookup if you have too many rows. Ciao. Giuseppe
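As a rough sketch of the summary-index approach (the index, sourcetype, and the stats split are placeholders for whatever your matching logic needs):
index=proxy sourcetype=web:access
| stats count BY url
| collect index=summary source=url_matches
A scheduled search like this writes pre-aggregated results into the summary index, which you can then search directly instead of running a huge lookup against raw events.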
Hi @gcusello, Thank you for the response. I did not create this csv; I'm only working with it, inherited from past co-workers. So I have been spending a bit of time trying to fix this issue. I've tried to find any regex issues with this file, but it's a little difficult with how many lines there are. Are you asking if I can share the lookup file? And as far as using a summary index, that is something I haven't tried yet.
It works, thank you @richgalloway 
Hi @Ganesh1, there are many solutions to this request in the Community. You have to create a lookup (called e.g. perimeter.csv) containing the 40 hosts to monitor (with at least one column, "host") and then run something like this every hour:
| tstats count WHERE index=* BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0
Ciao. Giuseppe
Hi @jhooper33, the error message seems to be related to the regex contained in the search; it doesn't seem to be related to the lookup. Could you share it? Anyway, more than 50,000 lines exceeds the max limit for a lookup, so use a summary index instead of a lookup. Ciao. Giuseppe
Hi @madhav_dholakia, your process is OK, only one question: if the report is refreshed every minute, why do you refresh the panel every 15 seconds? It's not useful! Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @Thats_my_usrnme, you have to use the Deployer to send the first version of an app to the SHs. Then every configuration update (made via the GUI) is replicated to all the other cluster members, but not to the Deployer. If you have to update the app, you still have to do it via the Deployer; the updates made on the members via the GUI remain, though, because they live in the local folders. If you make an update directly to a conf file, it isn't replicated to the other members. You can find more details at https://docs.splunk.com/Documentation/Splunk/9.1.2/DistSearch/AboutSHC and https://www.youtube.com/watch?v=VPa3EjqD8Q4 Ciao. Giuseppe
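For reference, pushing an app from the Deployer is done with something like this, run on the Deployer itself (host and credentials are placeholders):
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme
The -target can be any one cluster member; the bundle is then distributed to all members from there.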
Hello team, I have a distributed environment and I get some data via syslog. We created some regexes for field extraction on the SH captain, not on the Deployer (possibly this should be search-time field extraction), and everything works, but I don't get it. I know the SH captain replicates the conf file to the other SHs - is this process automatic or not? I can't see the same config on the Deployer; I think that's normal because we don't have an app for this log source, we just use custom parsing. If I had an app, I could use the Deployer, but in this case I'm wondering about the custom process. Can somebody explain this process to me? And why didn't we use the Deployment Server for HF parsing? (Maybe there's another way; I'm not clear on it.)
Hi Team/Community, I'm having an issue with a lookup file. I have a csv with two columns: the 1st is named ioc and the second is named note. This csv is an intel file created for detecting visits by users to malicious URLs. The total number of lines in this csv is 66,317, and its encoding is ASCII. We keep having an issue where, when running the search, Splunk gives an error message of "Regex: regular expression is too large". This error isn't consistent, and this particular alert doesn't seem to trigger much for many of our customers. We also have similar separate alerts built for looking for domains and IPs; those work much better than this URL csv. I would like help in seeing what the issue is with the regex being too large. Is the number of lines causing a major issue with this file running properly? Any help would be greatly appreciated, as I think this search is not running efficiently.