All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Anyway, with big lookups you should use the KV store rather than csv files, since csv searches are linear and thus not very efficient as the file grows.
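As a rough illustration of the switch (stanza, collection, and field names here are assumptions, not from the original post), a CSV-backed lookup can be moved to a KV store collection with something like:

```conf
# collections.conf in your app -- declares the collection
[big_lookup_collection]
# optional: index the lookup key for faster matching
accelerated_fields.ioc_accel = {"ioc": 1}

# transforms.conf -- exposes the collection as a lookup
[big_lookup]
external_type = kvstore
collection = big_lookup_collection
fields_list = _key, ioc, note
```

After that, | inputlookup big_lookup and | lookup big_lookup ioc OUTPUT note work as before, but matching is backed by the KV store instead of a linear scan of a csv file. This is only a minimal sketch; check the lookup definitions in your own app before copying it.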
I'm testing the code you posted and indeed something is very strange. It is like base0 is not configured correctly. Anything chained to it cannot return anything unless it is a semantic no-op like "| search *". I also see the method you experimented with in that code using base1. For now you can use that as a workaround. I will continue to look into what's wrong with the base0 search.
What's your actual config regarding those overrides? I have a feeling that you should instead put a decent syslog receiver in front of your HF.
I think I have the solution. Given that the base search is in fact not really a search, but a reference to an index:

index=_internal

it seems that no pipeline exists, and this breaks the chain, as arguably there is no real search to begin with.

By changing my base search (base2) as follows (which combines base0 and base0chain1a), a pipeline is created:

index=_internal | search useTypeahead=true

This then allows me to extend it further with an additional chained search successfully:

| stats count

This gives me the expected behaviour, shown in the bottom row of graphs.
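For anyone hitting the same wall, the working setup above can be sketched in Simple XML roughly like this (the ids, time range, and panel are illustrative, not taken from the original dashboard):

```xml
<form version="1.1">
  <!-- base2: a real search pipeline, not just a bare index reference -->
  <search id="base2">
    <query>index=_internal | search useTypeahead=true</query>
    <earliest>-15m</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <single>
        <!-- chained search extending base2 -->
        <search base="base2">
          <query>| stats count</query>
        </search>
      </single>
    </panel>
  </row>
</form>
```

The key point is that the base query already contains a pipe, so chained searches have a pipeline to attach to.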
Hi all, I have a problem similar to one already solved by @PickleRick in a previous question: I have a flow from a concentrator (a different one from the previous question) that sends many logs to my HF via syslog. It's a mixture of many kinds of logs (Juniper, Infoblox, Fortinet, etc.). I have to override the sourcetype value, assigning the correct one, but the issue is that afterwards the related add-ons have to do a second override of the sourcetype value, and this second override doesn't work. I'd like a hint about the two solutions I thought of, or a different one:
- if it's possible, how can I make a double sourcetype override?
- do you think it's better to modify the add-ons to avoid the first override?
Thank you for your help. Ciao. Giuseppe
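For context, a first-stage override of the kind described is usually done on the HF with props/transforms along these lines (the source stanza, regex, and sourcetype names are purely illustrative):

```conf
# props.conf on the HF
[source::udp:514]
TRANSFORMS-set_sourcetype = set_st_fortinet

# transforms.conf
[set_st_fortinet]
REGEX = devname=
FORMAT = sourcetype::fortinet
DEST_KEY = MetaData:Sourcetype
```

The catch, as far as I know, is that index-time props keyed on the rewritten sourcetype are not re-applied in the same pipeline pass, which would match the behaviour described above: the add-on's own override never gets a chance to fire.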
Hi @jhooper33, no, I asked you to share the search that caused the "regex too large" message, not the lookup, to understand what the issue with the regex could be. I suggest exploring the use of summary indexes or a Data Model instead of a lookup if you have too many rows. Ciao. Giuseppe
Hi @gcusello, thank you for the response. I did not create this csv; I only work with it as inherited from past co-workers. So I have been spending a bit of time trying to fix this issue. I've tried to find any regex issues with this file, but it's a little difficult with how many lines there are. Are you asking if I can share the lookup file? And as far as using a summary index, this is something I haven't tried yet.
It works, thank you @richgalloway 
Hi @Ganesh1, there are many solutions to this request in the Community. You have to create a lookup (called e.g. perimeter.csv) containing the 40 hosts to monitor (with at least one column "host") and then run something like this every hour:

| tstats count WHERE index=* BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

Ciao. Giuseppe
Hi @jhooper33, the error message seems to be related to the regex contained in the search; it doesn't seem to be related to the lookup. Could you share it? Anyway, more than 50,000 lines exceeds the limit for a lookup, so use a summary index instead of a lookup. Ciao. Giuseppe
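On the summary index suggestion: one simple pattern is to copy the intel file into an index once (or on a schedule) and then correlate with stats instead of a huge lookup. Index and lookup names below are assumptions, not from the thread:

```spl
``` one-off (or scheduled): copy the intel csv into a summary index ```
| inputlookup ioc_lookup.csv
| collect index=ioc_intel
```

After that you can match traffic against index=ioc_intel with a stats-based correlation on the shared field, which avoids generating one enormous regex from the lookup. Treat this as a sketch of the approach, not a drop-in search.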
Hi @madhav_dholakia, your process is ok, only one question: if the report is refreshed every minute, why do you refresh the panel every 15 seconds? It isn't useful! Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated
Hi @Thats_my_usrnme, you have to use the Deployer to send the first version of an app to the SHs. Then every configuration update (made via the GUI) is replicated to all the other cluster members, but not to the Deployer. If you have to update the app, you still have to do it via the Deployer; anyway, the updates made on the members via the GUI remain, because they are in the local folders. If you make an update directly in a conf file, it isn't replicated to the other members. You can find more details at https://docs.splunk.com/Documentation/Splunk/9.1.2/DistSearch/AboutSHC and https://www.youtube.com/watch?v=VPa3EjqD8Q4 Ciao. Giuseppe
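For completeness, the Deployer push described above usually looks like this (app name, hostname, and credentials are placeholders):

```shell
# On the Deployer: stage the app under the shcluster directory
cp -r my_app $SPLUNK_HOME/etc/shcluster/apps/

# Then push the bundle to the cluster via any member
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme
```

The bundle is distributed from that member to the whole cluster, so you only run the command once.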
Hello team, I have a distributed environment and I receive some data via syslog. We created some regexes for field extraction on the captain SH, not on the Deployer (possibly this part should be search-time field extraction), and everything works, but I don't understand it. I know the captain SH replicates the conf file to the other SHs. Is this process automatic or not? I can't see the same config on the Deployer; I think that's normal, because we don't have an app for this log source, we just use a custom parse. If I had an app, I could use the Deployer, but in this case I'm wondering about the custom process. Can somebody explain these processes to me? And why didn't we use a Deployment Server for HF parsing? (Maybe there's another way; I'm not clear on it.)
Hi Team/Community, I'm having an issue with a lookup file. I have a csv with two columns; the 1st is named ioc and the second is named note. This csv is an intel file created for searching for any visits by users to malicious urls. The total number of lines for this csv is 66,317. The encoding for this csv is ascii. We keep having an issue with this search: when running it, Splunk will give an error message of "Regex: regular expression is too large". This error isn't consistent, and this particular alert doesn't seem to trigger much for many of our customers. We also have similar separate alerts built for looking for domains and IPs. Those work much better than this url csv. I would like help in seeing what the issue is with the regex being too large. Is the number of lines causing a major issue with this file running properly? Any help would be greatly appreciated, as I think this search is not running efficiently.
Start by illustrating/mocking up/explaining the data input. (Anonymize as needed.) What are the data fields of interest? In your search, what does the field 'query' stand for? What are sample/mock values? What does "Workstation" do in your logic? Anyway, your sort command has no effect because "combo" is no longer available after stats. If anything, you can try

index=foo Host=<variable>
| streamstats count(query) as Domains by User query Workstation
| eval combo=Domains +" : "+ query
| stats values(combo) as "Unique Hits : Domain" by User Workstation
| sort - "Unique Hits : Domain"
The License Manager can show license usage over time. It can group the usage by index or sourcetype to help you find where the usage is coming from. Sudden spikes can be normal periodic increases, but often they are caused by a new data source, or by an existing source that changed verbosity (turned on DEBUG logging, for instance).
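If you prefer a search over the License Manager UI, a common starting point (assuming the default internal logs are searchable from your SH) is:

```spl
index=_internal source=*license_usage.log* type=Usage
| eval GB = b/1024/1024/1024
| timechart span=1d sum(GB) AS GB BY idx
```

Here b is the ingested volume in bytes and idx the target index; swap idx for st to group by sourcetype instead.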
Hi, we have been seeing a sudden spike in license consumption in our Splunk ES since last week. Where can we see the daily license consumption for all indexes, and what could be the cause of this sudden spike?
TYVM for the reply and info
If presenting the result in multivalue format is that important, you can pad each decimal octet of the IPv4 address to three digits, then use mvsort.

| makeresults
| eval ip = split("119.0.6.159,62.0.3.75,63.0.3.84,75.0.3.80,92.0.4.159", ",")
``` data emulation above ```
| eval idx = mvrange(0,4)
| foreach ip mode=multivalue
    [eval sorted_ip = mvappend(sorted_ip, mvjoin(mvmap(idx, printf("%.3d", tonumber(mvindex(split(<<ITEM>>, "."), idx)))), "."))]
| eval sorted_ip = mvsort(sorted_ip)
| table ip sorted_ip

You'll get:

ip: 119.0.6.159, 62.0.3.75, 63.0.3.84, 75.0.3.80, 92.0.4.159
sorted_ip: 062.000.003.075, 063.000.003.084, 075.000.003.080, 092.000.004.159, 119.000.006.159

If you want the padding stripped from the decimal octets, you can do that with printf or any number of other methods. I'll leave this for your homework.
May we know if you have a working Splunk environment (Splunk indexer(s), Linux UFs already sending logs to the indexer, required apps installed on the SH, etc.)? If yes, please tell us exactly what you have. Or do you have nothing, and do you want to start from zero?