All Posts
Hello, I've encountered an issue in my Splunk environment that's been causing some headaches. When running a search, I receive the following error message: "Search Peer has the following message: 'Error in 'SearchParser': The search specifies a macro 'my_macro' that cannot be found.'" This error seems to be related to a missing macro called 'my_macro', but I'm unsure why this is happening or how to resolve it. I've checked my search query and it appears to be correct. Can anyone provide some guidance on what might be causing this error and how I can go about resolving it? Any help or insights would be greatly appreciated! Thank you.
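In case it helps frame the question: a minimal sketch of how such a macro is defined in macros.conf (the definition shown is hypothetical; only the name 'my_macro' comes from the error). A common angle to check is whether the macro exists in the app context running the search and whether its permissions make it visible to the searching user.

macros.conf:
[my_macro]
# hypothetical definition; the real macro body is unknown
definition = index=main sourcetype=access_combined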
What do you mean "it doesn't do anything"? Please share the search and the results
Hi Team, I am trying to schedule an alert based on thresholds for 2 time windows:
- alert if the count falls to 0 events between 23:00 and 07:00
- alert if the count falls to less than 20 events between 07:00 and 23:00
Is it possible to define 2 thresholds like the above in one alert?
index=ABC sourcetype=XYZ login | stats count | where count=0 (between 23:00 and 07:00)
index=ABC sourcetype=XYZ login | stats count | where count<=20 (between 07:00 and 23:00)
Please advise. Thank you
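A minimal sketch of one way to fold both windows into a single scheduled alert, branching on the hour at which the search runs (the index/sourcetype come from the question; the exact hour boundaries and the strict count<20 reading of "less than 20" are assumptions):

index=ABC sourcetype=XYZ login
| stats count
| eval hour=tonumber(strftime(now(), "%H"))
| where (count=0 AND (hour>=23 OR hour<7)) OR (count<20 AND hour>=7 AND hour<23)

Scheduled e.g. hourly with the trigger condition "number of results > 0", this raises the alert whenever the threshold for the current window is breached.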
Thanks..... Please help with the message below:
(loggingfilterresults) - GET|/ready/term/planess|||||||metrics
I need the uri field.
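A runnable sketch for this pipe-delimited format, assuming the uri is always the field immediately after GET (the sample line is the one from the post):

| makeresults
| eval _raw="(loggingfilterresults) - GET|/ready/term/planess|||||||metrics"
| rex "GET\|(?<uri>[^|]+)"
| table uri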
I don't understand what you mean. It does not do anything. How do I get the result I am asking for?
In outputs.conf you can configure compressed = <boolean> to compress the data, but the documentation doesn't specify how the compression is done. There is also no parameter specifying the compression method. So my question is: what compression is used by default, and is there any documentation that describes it?
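For context, a minimal sketch of where the setting lives in outputs.conf (the group name and server are hypothetical); note that the receiving indexer's splunktcp input has a matching compressed setting:

[tcpout:my_indexers]
server = 10.0.0.1:9997
compressed = true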
We have a Splunk Enterprise installation where everything is on the same server/install (search head etc.). At the moment we have a script that shuts down the Splunk services, zips the whole /opt/splunk/ folder, and copies it to a NAS. The problem is that this takes about 1.5 hours, and during that time we cannot reach Splunk (since the service is shut down). Would it be possible to do this "on the fly" instead of shutting down the service, and just zip the entire folder while it is "alive"? My thinking is that this won't be optimal, since bucket files will be "open" etc. But what is your take on this? Maybe there is a better solution?
Try something like this:
| rex "uri=(?<uri>\S+)"
Hi, please help with the message below:
httpStatusCode=300 method=GET uri=/ralt/gart/readyness uuid=-
I need the uri field.
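For what it's worth, a self-contained check of the rex suggested above against this sample line:

| makeresults
| eval _raw="httpStatusCode=300 method=GET uri=/ralt/gart/readyness uuid=-"
| rex "uri=(?<uri>\S+)"
| table uri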
[search] | stats count by ClientName Outcome | eventstats sum(count) as total by ClientName | eval percent=100*count/total
[search] | stats count by ClientName Outcome
example:
Client1 Positive count
Client1 Negative count
Client2 Positive count
Client2 Negative count
Client2 Unknown count
How do I get the percentage for each client's outcomes?
Client1 Positive count %
Client1 Negative count %
Client2 Positive count %
Client2 Negative count %
Client2 Unknown count %
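A self-contained way to try the eventstats approach from the answer above, using made-up clients and counts:

| makeresults count=5
| streamstats count as i
| eval ClientName=if(i<=2, "Client1", "Client2")
| eval Outcome=case(i=1, "Positive", i=2, "Negative", i=3, "Positive", i=4, "Negative", i=5, "Unknown")
| eval count=i*10
| eventstats sum(count) as total by ClientName
| eval percent=round(100*count/total, 2)
| table ClientName Outcome count percent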
Hi all,

So here is the deal: I have to prepare some (a lot of) db_outputs (using db_connect); however, the corresponding tables do not exist yet. The colleagues responsible for them are on different tasks. I would like to configure the exports in advance, so that once the tables are ready, the output would just flow (and I might not be able to work on it later).

I did not manage to find a way to do that using the GUI, as it requires every step of the way to be fulfilled; so even if I had data waiting for me, I would not be able to prep the field matching. So my idea is to configure them in db_outputs.conf, and then a restart of the HF should (or at least I think) be the solution. However, there is this:

customized_mappings = <string>
# required
# Specifies the output data name (fieldx) and database column number (1...n) mappings.
# The expected format is:
# field1:column1:type1,field2:column2:type2,…,fieldN:columnN:typeN

And I do not know where to get the values for the types (I already know which field will be varchar, timestamp etc.; what I do not know is the numeric representation of the field types). So it is a two-fold question:
1) Does anybody know this numeric-to-field-type mapping (for example varchar=12, unsigned integer=4... these I got from previous tables)?
2) Has anyone configured outputs in advance, before the corresponding table is even created, and does it start later automatically?

have fun!
rd
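On question 1: the codes quoted match the java.sql.Types integer constants (VARCHAR=12, INTEGER=4, TIMESTAMP=93), though that is an observation from those values rather than something confirmed in the DB Connect docs. A hedged sketch of a mapping prepared in advance (the stanza and field/column names are hypothetical; only customized_mappings comes from the spec excerpt above):

[my_future_output]
# column 1 = varchar (12), column 2 = timestamp (93), column 3 = integer (4)
customized_mappings = host:1:12,event_time:2:93,login_count:3:4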
Hi @Adpafer,
as I said, tcpout is a configuration for sending logs from Splunk to another Splunk Indexer, not via syslog; to use syslog, you can use the method described in the url above.
What's your architecture: do you have a distributed architecture (Search Heads and Indexers) or a standalone instance? As I described, the solution depends on it.
You probably need the activity of a Splunk Architect to design your flow.
Ciao.
Giuseppe
It is not my decision. The requirement is to send logs from the indexer. I made a dedicated app on the indexer to send logs from one index to the QRadar port 514, and it works fine:

outputs.conf:
[tcpout]
forwardedindex.3.blacklist = (.*)
forwardedindex.4.whitelist = (indexA)

[tcpout:tcp_qradar_10_10_10_10_514]
disabled = false
sendCookedData = false
server = 10.10.10.10:514

props.conf:
[source::9997]
TRANSFORMS-routing = send_to_qradar_tcp_10_10_10_10_514

transforms.conf:
[send_to_qradar_tcp_10_10_10_10_514]
DEST_KEY = _TCP_ROUTING
FORMAT = tcp_qradar_10_10_10_10_514
REGEX = .

And now I have to add another rule for indexB to be forwarded from the indexer to the same IP but port 12468. I do not know how to do it.

regards, pawelF
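A hedged sketch of how the same pattern might extend to indexB on port 12468, keying the new transform on the index name (stanza names are made up; this mirrors the documented _TCP_ROUTING technique rather than anything confirmed for this setup, and the existing forwardedindex.4.whitelist would presumably also need to admit indexB):

outputs.conf:
[tcpout:tcp_qradar_10_10_10_10_12468]
disabled = false
sendCookedData = false
server = 10.10.10.10:12468

transforms.conf:
[send_to_qradar_tcp_10_10_10_10_12468]
SOURCE_KEY = _MetaData:Index
REGEX = ^indexB$
DEST_KEY = _TCP_ROUTING
FORMAT = tcp_qradar_10_10_10_10_12468

props.conf:
[source::9997]
TRANSFORMS-routing = send_to_qradar_tcp_10_10_10_10_514, send_to_qradar_tcp_10_10_10_10_12468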
Hi,
the MC is the best option to start that analysis, but as it is limited to a certain number of indexes etc. and you probably want to see more, you can easily use it as a starting point! Just select e.g. what @richgalloway said and then click the small magnifying glass at the bottom of the panel; it will open a search window where you can modify that search as you need. You could e.g. show more individual indexes or extend the historic time range etc. with your "own" search.
r. Ismo
Hi @Adpafer,
I suppose that you're speaking of forwarding by syslog.
The configurations you used are for sending logs to other Indexers, not to a third party.
In this case, you should follow the instructions at https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Forwarddatatothird-partysystemsd#Send_a_subset_of_data_to_a_syslog_server
Usually this configuration is used on Heavy Forwarders, not on Indexers; do you have HFs in your architecture? If yes, you can configure them as described in the above documentation; if not, you could use the Syslog Mod Alert App (https://splunkbase.splunk.com/app/4199), even if it isn't certified on Splunk 9.x, but on Search Heads, not on Indexers.
Ciao.
Giuseppe
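For orientation, a minimal sketch of the syslog-output pattern that documentation page describes (the group, transform, and sourcetype names here are hypothetical):

outputs.conf:
[syslog:my_syslog_group]
server = 10.10.10.10:514

transforms.conf:
[send_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = my_syslog_group

props.conf:
[my_sourcetype]
TRANSFORMS-syslogrouting = send_to_syslog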
Hi,
at search time you could use DELIMS with props.conf & transforms.conf:

DELIMS = <quoted string list>
* NOTE: This setting is only valid for search-time field extractions.
* IMPORTANT: If a value may contain an embedded unescaped double quote character, such as "foo"bar", use REGEX, not DELIMS. An escaped double quote (\") is ok. Non-ASCII delimiters also require the use of REGEX.
* Optional. Use DELIMS in place of REGEX when you are working with ASCII-only delimiter-based field extractions, where field values (or field/value pairs) are separated by delimiters such as colons, spaces, line breaks, and so on.
* Sets delimiter characters, first to separate data into field/value pairs, and then to separate field from value.
* Each individual ASCII character in the delimiter string is used as a delimiter to split the event.
* Delimiters must be specified within double quotes (eg. DELIMS="|,;"). Special escape sequences are \t (tab), \n (newline), \r (carriage return), \\ (backslash) and \" (double quotes).
* When the event contains full delimiter-separated field/value pairs, you enter two sets of quoted characters for DELIMS:
  * The first set of quoted delimiters extracts the field/value pairs.
  * The second set of quoted delimiters separates the field name from its corresponding value.
* When the event only contains delimiter-separated values (no field names), use just one set of quoted delimiters to separate the field values. Then use the FIELDS setting to apply field names to the extracted values.
  * Alternately, Splunk software reads even tokens as field names and odd tokens as field values.
  * Splunk software consumes consecutive delimiter characters unless you specify a list of field names.
* The following example of DELIMS usage applies to an event where field/value pairs are separated by '|' symbols and the field names are separated from their corresponding values by '=' symbols:
  [pipe_eq]
  DELIMS = "|", "="
* Default: ""

But at ingest time you must use REGEX to separate those if needed. Are you sure that you need this at ingest time, and that search time is not enough?
r. Ismo
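A small sketch of the values-only case, assuming a pipe-delimited sourcetype (all names here are made up):

transforms.conf:
[pipe_delimited_values]
DELIMS = "|"
FIELDS = prefix, uri, suffix

props.conf:
[my_pipe_sourcetype]
REPORT-pipe_fields = pipe_delimited_values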
Why are you trying to forward from the indexing layer and not from the forwarding layer directly? Set up the outputs on the HF or SF to send data to QRadar and Splunk instead of from the indexers. Ideally that is what I would do.
Nice to hear that you found a solution, or actually several. You should remember that with Splunk there are almost always several ways to do things, not only one! When you need to select "the best" one, you should look at performance etc. in the Job Inspector to understand better how they are working. Happy Splunking!
Hi @bowesmana, the data comes from a Splunk index and it is a csv file.