All Posts

Since data from scripted input is ingested "as is", you really have two and a half choices.

1. Send the metadata about the file being read by your script as part of your event. If it's not needed in the event itself, you can then parse it out with props/transforms into the source field (or any other indexed field) and possibly remove it from the _raw message.

2. Use the HEADER_MODE setting to dynamically insert metadata into your event stream. That's actually not that much different from point 1, since you'd have to insert it with every single event, and it's a very unusual configuration option, so it might be difficult to maintain.

3. That's the half-choice, since it requires you to change the approach to the ingestion process: rewrite your script so that it does what it does now but sends the results to a HEC /data endpoint, and run it independently from your Splunk instance. This way you can easily manipulate additional fields freely.
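As an illustration of option 1, here's a minimal sketch of the props/transforms side, assuming (hypothetically) that the script prefixes every event with "file=<path> "; the sourcetype name, transform name and regex are made up and need adjusting to your actual data:

props.conf:
[my:scripted:sourcetype]
TRANSFORMS-set_source = set_source_from_event
SEDCMD-strip_file_prefix = s/^file=\S+\s+//

transforms.conf:
[set_source_from_event]
REGEX = ^file=(\S+)
FORMAT = source::$1
DEST_KEY = MetaData:Source

The SEDCMD line is only needed if you also want the prefix stripped from _raw after the source field has been set.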
Maintaining dynamic assets is a bit of a challenge. Since you're talking about Assets and Identities, I assume you're talking about Enterprise Security. But you have to ask yourself what you want from such an asset database. If users are on dynamic IPs (as is typical for consumer internet connections), a database built from one-off connections will be very unreliable and quickly outdated. So it's not only about how to build such a database (that probably comes down to some more or less clever scripting to retrieve the data from, for example, your company webserver logs or VPN service, save it to a file and push it to ES as an asset lookup), but about what you want to use it for and how.
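For the mechanical side, here is a minimal sketch of such a scheduled search, assuming a hypothetical index=vpn with user and src_ip fields and an illustrative lookup name; ES asset lookups typically expect columns such as ip, nt_host, dns and owner, so adjust the output to your own schema:

index=vpn earliest=-24h
| stats latest(src_ip) as ip latest(_time) as last_seen by user
| eval owner=user
| table ip owner last_seen
| outputlookup my_dynamic_assets.csv

Scheduled regularly, this keeps the lookup refreshed, but the reliability concern above still applies: yesterday's IP-to-user mapping may already be wrong today.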
I want to merge the cells in column S.No and share the output with the requestor. The only ask is that Splunk should take all the values, separated into the different colours, and send three different emails. Example: S.No. contains 1 2 3 4 5 6 7 8 9 10 11 12, and I should send emails for S.No 1, 4 and 9.
Hi IT Whisperer, I was hoping for a transaction parameter which allows handling such a case, but I understand from your answer that the search time range is a hard limit. Filtering events means that I'll lose a bit of data at the end of the range, but I can live with it. I've worked out something like this to filter on the last 60 sec:

| transaction ... maxevents=2
| eventstats max(_time) as latestMessage
| where !(eventcount = 1 AND _time > latestMessage-60)

Thanks for your help.
Any eventlog you can see in the Event Viewer can be ingested into Splunk. It's just that you have to address it properly. The easiest way to find the proper name is to go to Event Viewer, find your eventlog, right-click it, select Properties and look at the Full Name field. In the case of your log it's Microsoft-Windows-Privacy-Auditing/Operational, so you need to define a proper inputs.conf stanza for this log:

[WinEventLog://Microsoft-Windows-Privacy-Auditing/Operational]
index = <your_destination_index>
disabled = 0
Hi @gcusello, I went through the documentation but could not find any example of routing HTTP inputs. Will this still work if I plan to use "outputgroup=<string>" in the HEC inputs.conf?

//Yash
{ "visualizations": { "viz_yxqQUaLH": { "type": "splunk.table", "options": { "columnFormat": { "Websphere": { ... See more...
{ "visualizations": { "viz_yxqQUaLH": { "type": "splunk.table", "options": { "columnFormat": { "Websphere": { "rowBackgroundColors": "> table | seriesByName(\"Websphere\") | matchValue(nameColumnFormatConfig)" }, "GUI": { "rowBackgroundColors": "> table | seriesByName(\"GUI\") | matchValue(nameColumnFormatConfig)" } }, "backgroundColor": "transparent", "tableFormat": { "rowBackgroundColors": "> table | seriesByIndex(0) | pick(tableRowBackgroundColorsByBackgroundColor)", "headerBackgroundColor": "> backgroundColor | setColorChannel(tableHeaderBackgroundColorConfig)", "rowColors": "> rowBackgroundColors | maxContrast(tableRowColorMaxContrast)", "headerColor": "> headerBackgroundColor | maxContrast(tableRowColorMaxContrast)" } }, "dataSources": { "primary": "ds_1ZOKlMox" }, "context": { "nameColumnFormatConfig": [ { "match": "DOWN", "value": "#FF0000" }, { "match": "UP", "value": "#00FF00" } ] } } }, "dataSources": { "ds_1ZOKlMox": { "type": "ds.search", "options": { "query": "| makeresults\n| eval Websphere=mvindex(split(\"DOWN,UP\",\",\"),random()%2)\n| eval GUI=mvindex(split(\"DOWN,UP\",\",\"),random()%2)\n| table Websphere GUI", "queryParameters": { "earliest": "-24h@h", "latest": "now" } }, "name": "Search_2" } }, "defaults": { "dataSources": { "ds.search": { "options": { "queryParameters": { "latest": "$global_time.latest$", "earliest": "$global_time.earliest$" } } } } }, "inputs": {}, "layout": { "type": "absolute", "options": { "width": 1440, "height": 960, "display": "auto" }, "structure": [ { "item": "viz_yxqQUaLH", "type": "block", "position": { "x": 20, "y": 20, "w": 480, "h": 230 } } ], "globalInputs": [] }, "description": "", "title": "Webstatus" }
| eval correlation1=coalesce(ID_1_A, ID_2_A)
| eval correlation2=coalesce(ID_1_B, ID_2_B)
| eventstats values(index1data) as index1data, values(index2data) as index2data by correlation1 correlation2
| eval correlation1=coalesce(ID_1_A, ID_2_A)
| eval correlation2=coalesce(ID_1_B, ID_2_C)
| eventstats values(index1data) as index1data, values(index2data) as index2data by correlation1 correlation2
| eval correlation1=coalesce(ID_1_A, ID_2_B)
| eval correlation2=coalesce(ID_1_B, ID_2_A)
| eventstats values(index1data) as index1data, values(index2data) as index2data by correlation1 correlation2
| eval correlation1=coalesce(ID_1_A, ID_2_B)
| eval correlation2=coalesce(ID_1_B, ID_2_C)
| eventstats values(index1data) as index1data, values(index2data) as index2data by correlation1 correlation2
| eval correlation1=coalesce(ID_1_A, ID_2_C)
| eval correlation2=coalesce(ID_1_B, ID_2_A)
| eventstats values(index1data) as index1data, values(index2data) as index2data by correlation1 correlation2
| eval correlation1=coalesce(ID_1_A, ID_2_C)
| eval correlation2=coalesce(ID_1_B, ID_2_B)
| eventstats values(index1data) as index1data, values(index2data) as index2data by correlation1 correlation2
Yes, I have added it. Please find below the complete source code.

{
  "type": "ds.search",
  "options": {
    "query": "index = index host=hostname source=\"/var/log/history-*.log\" servername | table Websphere GUI \n| eval Websphere=if(Websphere=\"0\",\"UP\",\"DOWN\")\n| eval GUI=if(GUI=\"0\",\"UP\",\"DOWN\")",
    "queryParameters": {
      "earliest": "-10m@m",
      "latest": "now"
    },
    "refresh": "10m",
    "refreshType": "delay"
  },
  "options": {
    "columnFormat": {
      "Websphere": {
        "rowBackgroundColors": "> table | seriesByName(\"Websphere\") | matchValue(WebsphereColumnFormatConfig)"
      }
    }
  },
  "context": {
    "WebsphereColumnFormatConfig": [
      { "match": "DOWN", "value": "#FF0000" },
      { "match": "UP", "value": "#00FF00" },
    ]
  }
  "name": "DC Web Server _ search"
}
Hi @Jananie.Rajeshwari, Thanks for sharing your concern. I've shared this with the team. We'll get back to you soon. 
Try the appendpipe as I suggested

index = events_prod_tio_omnibus_esa ( "SESE030" ) sourcetype=Log_mvs
| rex field=msg "(ADV|ALERT REACH)\s* (?<Nb_msg>[^\s]+)"
| stats latest(Nb_msg) as Back_log
| appendpipe [| stats count | where count=0 | rename count as Back_log]
| table Back_log
Hi Splunk Community,

I need help to write a Splunk query to join two different indexes using any Splunk command that will satisfy the logic noted below.

Two separate ID fields in Index 2 must match two separate ID fields in Index 1 using any permutation of Index 2's three ID fields. Here is an outline of the logic:

Combine an Index 1 record with an Index 2 record into a single record when any of the following matching conditions is satisfied:
(ID_1_A=ID_2_A AND ID_1_B=ID_2_B) OR
(ID_1_A=ID_2_A AND ID_1_B=ID_2_C) OR
(ID_1_A=ID_2_B AND ID_1_B=ID_2_A) OR
(ID_1_A=ID_2_B AND ID_1_B=ID_2_C) OR
(ID_1_A=ID_2_C AND ID_1_B=ID_2_A) OR
(ID_1_A=ID_2_C AND ID_1_B=ID_2_B)

Sample Data:

Index 1:
| ID_1_A | ID_1_B |
|--------|--------|
| 123    | 345    |
| 345    | 123    |

Index 2:
| ID_2_A | ID_2_B | ID_2_C |
|--------|--------|--------|
| 123    | 345    | 999    |
| 123    | 999    | 345    |
| 345    | 123    | 999    |
| 999    | 123    | 345    |
| 345    | 999    | 123    |

Any help would be greatly appreciated.

Thanks.
Extending the end time is the solution; you just then have to filter out any transactions which started after your required time period's end time. (How else are you going to find the ends of the transactions if you don't include those events in your search?)
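To make that concrete, one way to express the filter is sketched below; the transaction field names are hypothetical, and the example assumes the required window ends on the hour. transaction stamps each result with the _time of its earliest event, so filtering on _time drops transactions that only started after the window:

| transaction JobId startswith="started" endswith="finished" maxevents=2
| where _time <= relative_time(now(), "@h")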
Sorry, my query was not that. I will try to explain it again.

Query:

index = events_prod_tio_omnibus_esa ( "SESE030" ) sourcetype=Log_mvs
| rex field=msg "(ADV|ALERT REACH)\s* (?<Nb_msg>[^\s]+)"
| stats latest(Nb_msg) as Back_log

If there is no record fetched in the last 15 mins, then currently it is showing "No results found. Try expanding the time range." I want to display the number 0 instead of "No results found. Try expanding the time range." Is it possible?
It looks like you didn't read what I had suggested properly as you have missed the "options" key
I am not sure I understand - if you restrict the search to the last 15 minutes, you will either get a number of events or none. If you want to determine how many events you have, you could do this:

index = events_prod_tio_omnibus_esa ( "SESE023" ) sourcetype=Log_mvs
| rex field=msg "(ADV|ALERT REACH)\s* (?<Nb_msg>[^\s]+)"
| rex field=msg "NB\s* (?<Msg_typ>[^\s]+)"
| table Nb_msg
| appendpipe [| stats count]
| table count
| where isnotnull(count)
Thank you very much!!!!
Hi @shadysplunker, did you follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.2.1/Forwarding/Routeandfilterdatad#Route_inputs_to_specific_indexers_based_on_the_data_input ? See "Perform selective indexing and forwarding".

Ciao.

Giuseppe
Hi,

We are collecting the logs directly through UF and HEC in the indexer cluster. All inputs are defined in the Cluster Manager and the bundle is then applied to the indexers. Currently we are sending the data to the Splunk indexers as well as to another output group. Following is the config of outputs.conf under peer-apps:

[tcpout]
indexAndForward = true

[tcpout:send2othergroup]
server = .....
sslPassword = .....
sendCookedData = true

This config is currently sending the same data to both outputs: indexing locally and then forwarding to the other group. Is there a way to keep some indexes indexed locally and have others sent only to the other group? I tried using props and transforms with _TCP_ROUTING but it is not working at all.

Thanks in advance!
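For reference, a minimal sketch of the selective indexing approach from the docs referenced above, under the assumption that the settings land on the instances doing the ingestion; the group name is taken from this thread and the HEC token name is purely illustrative, so verify the exact layout against the documentation:

outputs.conf:
[indexAndForward]
index = true
selectiveIndexing = true

[tcpout]
defaultGroup = send2othergroup

[tcpout:send2othergroup]
server = .....

inputs.conf (only on the inputs whose data should also be indexed locally):
[http://my_hec_token]
_INDEX_AND_FORWARD_ROUTING = local

With selectiveIndexing enabled, inputs without _INDEX_AND_FORWARD_ROUTING should only be forwarded to the output group rather than indexed locally, which is roughly the split described above.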