All Posts


@Jailson  What exactly are you looking for? Could you elaborate a bit more?
I have a survey that has a date field deletion_date. How can I filter this field by the Time range?

```
sourcetype=access_* status=200 action=purchase | top categoryId | where deletion_date > ?
```
Hi, here is a scenario:

Step 1
9h30 TradeNumber 13400101 gets created in system
9h32 TradeNumber 13400101 gets sent to market

Step 2
9h45 TradeNumber 13400101 gets modified in system
9h50 TradeNumber 13400101 gets sent to market with modification

Step 3
9h55 TradeNumber 13400101 gets cancelled in system
9h56 TradeNumber 13400101 gets sent to market as cancelled

I need to monitor the delay for sending the order to market. In the above scenario we have 3 steps for the same TradeNumber, and each needs to be calculated separately:

- Delay for sending the new trade
- Delay for modifying
- Delay for cancelling

The log does not allow differentiating the steps, but the sequence is always in the right order. If I use

```
| stats range(_time) as Delay by TradeNumber
| stats max(Delay)
```

then for TradeNumber 13400101 it will return 26 mins. I am looking for a result of 5 mins (gets modified, 9h45 to 9h50). Is there any way Splunk can match by sequence (or something else) and TradeNumber to calculate 3 values for the same TradeNumber?
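One possible sketch for this, assuming each step always produces exactly two events in order (one "in system", one "sent to market") and that the index name `trades` is a placeholder for your own:

```
index=trades
| sort 0 TradeNumber _time
| streamstats count as seq by TradeNumber
| eval step = ceil(seq / 2)
| stats range(_time) as Delay by TradeNumber step
| stats max(Delay) as MaxDelay by TradeNumber
```

streamstats numbers each trade's events in time order, so pairing consecutive events (ceil(seq/2)) reconstructs the steps and yields one Delay per step; in the scenario above this gives 2, 5 and 1 minutes, and the final max is 5 minutes.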
Regular expressions don't handle negation well.  The given regex will match the sample event because `[^...]` is a negated character class: it matches any single character not in the listed set, rather than negating the string "EventType": 500.  It's probably better to index matching events and discard the rest. Note that transforms run in order, so route everything to the nullQueue first and then send the keepers to the indexQueue:

```
# props.conf
[solarwinds:alerts]
TRANSFORMS-t = delete-others, keep-5000

# transforms.conf
[delete-others]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep-5000]
REGEX = ("EventType": 5000)
DEST_KEY = queue
FORMAT = indexQueue
```
Hi @gcusello I can confirm that the regex is correct because I see the app names when I display them in the table.  The problem is that it's not returning anything when there are devices with only an uninstall event but no subsequent install event for the same application.  Also, I'm not sure why I keep getting events for the same application being "removed successfully" every day when there is no installation of the application later on.
I want to send all the events to the nullQueue except those matching "EventType": 5000.

```
{"EventID": 2154635, "EventType": 5000, "NetObjectValue": null, "EngineID": null}
```

```
[solarwinds:alerts]
TRANSFORMS-t = eliminate-except-5000

[eliminate-except-5000]
REGEX = [\w\W]+[^("EventType": 500)]
DEST_KEY = queue
FORMAT = nullQueue
```
The values and list functions display results in lexicographic order and destroy any potential relationship among the fields.  One solution is to use mvzip to combine the fields, group the results, then unzip the fields. (Note the boolean operator must be uppercase OR, and iplocation takes a bare field name; the first approach also needs `as tuple` and an mvexpand before the split.)

```
index=okta OR index=network
| iplocation src_ip
| eval tuple = mvzip(src_ip, mvzip(deviceName, mvzip(City, Country)))
| stats values(tuple) as tuple by user, index
| mvexpand tuple
| eval fields = split(tuple, ",")
| eval src_ip = mvindex(fields, 0), deviceName = mvindex(fields, 1), City = mvindex(fields, 2), Country = mvindex(fields, 3)
```

A better approach might be to perform the iplocation command after stats:

```
index=okta OR index=network
| stats values(src_ip) as src_ip by user, index
| mvexpand src_ip
| iplocation src_ip
```
Hi @azer271 , let me know if I can help you more, or, please, accept one answer for the other people of Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
I believe I have managed to get myself confused and would like to request assistance with field extraction.

I have a new heavy forwarder which is going to connect to Splunk Cloud. First, the heavy forwarder acted as a simple Splunk Enterprise instance, before connecting to Splunk Cloud. The HF has these add-ons installed: Fortinet Fortigate Add-on for Splunk, Splunk Add-on for Palo Alto Networks, Splunk Add-on for Microsoft Windows, and Splunk Add-on for Checkpoint Log Exporter. I simply installed them, created inputs in the local folder, and they were good to go on the HF. On the Splunk Enterprise instance, all inputs work fine and all fields are parsed properly: Checkpoint logs, PA logs, Windows XML logs, and Fortigate logs.

However, after connecting to Splunk Cloud, the universal forwarder credentials package was downloaded from Splunk Cloud and the app installed on the HF. The connection is fine and logs are being received. The weird issue is that ONLY the Checkpoint and Fortigate logs have all their fields extracted successfully when I search in Splunk Cloud. For some reason, the Windows logs show a surprisingly small number of fields being extracted when I search in Splunk Cloud; when I search the Windows logs (old data in a test index) on the HF, they show a LOT of interesting fields (>300), which is great. The PA logs only have host, index, source, sourcetype, and _time extracted (plus default ones like linecount, punct, splunk_server) when I search in Splunk Cloud.

I am confused because the Checkpoint and Fortigate logs are all extracted successfully, but the others are not. I understand that the apps are recommended to be installed across the deployment (https://docs.splunk.com/Documentation/AddOns/released/Overview/Wheretoinstall), but I would like to know why some apps work and some do not. They are only installed on the HF; shouldn't the fields all be extracted at the forwarder layer?

Is it possible that the field extraction is not finished, since there is just too much data coming in, or too much data in total (PA logs >10000 events in the last 30 mins, Windows logs >2000 events in the last 30 mins)? Thanks. I appreciate your help.
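For what it's worth, one way to see exactly which extraction settings an instance would apply for a given sourcetype is btool (a sketch; pan:traffic is just an example sourcetype, substitute your own, and the path assumes a default install):

```
$SPLUNK_HOME/bin/splunk btool props list pan:traffic --debug
```

The --debug flag prints which app each props.conf setting comes from. Running this on the HF shows what it knows; for Splunk Cloud (where there is no shell access) you can instead check which add-ons are installed via the UI, since search-time extractions must exist where the search runs.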
Aren't you by any chance ingesting your events as XML?
Hi @Raees
As previously mentioned, the Splunk inbuilt webhooks use a POST with a pretty non-configurable output. You can use https://splunkbase.splunk.com/app/7450 which allows much more customisation. Here is a working example.

I installed the app and created an alert action as below.

URL:
```
https://api.telegram.org/bot<yourToken>/sendMessage
```

Payload:
```
{
  "chat_id": "<yourChatID>",
  "text": "$result.msg$"
}
```

This will send the value of the "msg" field from the Splunk search; obviously you can update this and use more fields if required too.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards
Will
There is yet another option: you can convert your installation to a cluster (you can have a cluster without default replication, so that you don't have multiple copies of the same data). You need an additional machine for the Cluster Manager and you need to convert your buckets to clustered ones, but then you can rely on cluster mechanics to move the data off the old indexer as you decommission it.
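As a rough sketch of that conversion (the hostname and secret are placeholders, and on older Splunk versions the flags are -mode master / -master_uri instead of manager):

```
# On the new Cluster Manager machine
splunk edit cluster-config -mode manager -replication_factor 1 -search_factor 1 -secret <key>
splunk restart

# On each indexer, point it at the manager
splunk edit cluster-config -mode peer -manager_uri https://<cm-host>:8089 -replication_port 9887 -secret <key>
splunk restart
```

With replication_factor 1 the cluster keeps a single copy of each bucket, and decommissioning a peer (splunk offline) lets the manager move its data to the remaining indexers.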
Hi @nksiba
To display only the current state of each machine without duplicates, you need to filter the events to show only the latest event for each machine. You can achieve this by using the stats command to group the events by MachineName (and any other relevant fields) and then using the latest function to get the most recent event for each group. Here's how you can modify your query:

```
index=idx1 sourcetype=machines_monitoring
| stats latest(RunId) as RunId, latest(State) as State, latest(Environment) as Environment, latest(Version) as Version by MachineName
| table RunId, MachineName, Environment, Version, State
```

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards
Will
Hello dear Community!
I have a set of separate machines logging a number of different events to Splunk; each group of events can be identified by a unique 'RunId' field. Each machine sends events multiple times per day. Via a simple 'table' query I can display all the collected info on the dashboard, like

```
index=idx1 sourcetype=machines_monitoring | table RunId, MachineName, Environment, Version, State
```

Now I have a lot of rows displayed for each machine, with different information about each machine's state. How can I filter the events so that the table shows only the current state of each machine, without duplicates, using only the latest group of events sent by each machine? I've tried `latest(RunId) by RunId, MachineName, Environment, Version, State` with no changes; all duplicated values are displayed as usual.
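For reference, a shorter pattern often used for "current state per machine" (a sketch; dedup keeps the first event it sees per MachineName, which is the most recent one since search results come back newest-first):

```
index=idx1 sourcetype=machines_monitoring
| dedup MachineName
| table RunId, MachineName, Environment, Version, State
```

Unlike the stats approach, this keeps all fields from the single most recent event per machine rather than combining latest values field by field.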
Thanks for the POST details. I don't see how all the info is supposed to be entered, as there is only a field for the URL.
Hi @Raees
Unfortunately the inbuilt webhook POST sends a payload as below, and it isn't possible to change the format:

```
{
  "result": {
    "sourcetype": "mongod",
    "count": "8"
  },
  "sid": "scheduler_admin_search_W2_at_14232356_132",
  "results_link": "http://web.example.local:8000/app/search/@go?sid=scheduler_admin_search_W2_at_14232356_132",
  "search_name": null,
  "owner": "admin",
  "app": "search"
}
```

I think this should be achievable with the https://splunkbase.splunk.com/app/4146 app, although possibly not as an alert action; it could be achieved by adding the necessary commands to the end of your SPL. I will see if I can work up an example.

Please let me know if this helped by adding karma and/or accepting as an answer if this resolves the issue for you.
@Raees
Sure, please check.

Splunk's webhook alert action sends a POST request to a specified URL. The payload is typically in JSON format, and you can customize it using tokens (e.g., $result.field$) to include alert details. Telegram's Bot API expects either a GET request with query parameters or a POST request with a JSON body.

1. Get your Telegram bot token and chat ID. You already seem to have these:
- Bot Token: ######### (replace with your actual token from BotFather).
- Chat ID: -######## (the ID of the group or chat, including the - for groups).

2. Set up the webhook in Splunk. In Splunk, go to Settings > Alert Actions > Webhook (or configure it as part of an alert). For the URL, use the Telegram API endpoint without query parameters:

```
https://api.telegram.org/bot<your-bot-token>/sendMessage
```

Replace <your-bot-token> with your actual bot token (e.g., https://api.telegram.org/bot123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11/sendMessage).

3. Custom payload: Splunk allows you to define a JSON payload. Telegram expects chat_id and text as parameters. Here's an example payload:

```
{
  "chat_id": "-########",
  "text": "Alert from Splunk: $result.message$"
}
```

Replace -######## with your actual chat ID. $result.message$ is a placeholder for a field from your search results (adjust based on your data; common tokens include $result.sourcetype$, $result.host$, or $trigger_reason$).

4. Test the webhook. Create a test alert in Splunk: go to Search, run a simple query (e.g., index=_internal | head 1). Save it as an alert, set the trigger condition (e.g., number of results > 0), and choose the Webhook action. Enter the URL and payload as described above. Trigger the alert and check your Telegram chat for the message.

NOTE: Ensure the payload is valid JSON and matches Telegram's API expectations (see https://core.telegram.org/bots/api#sendmessage). If $result.message$ doesn't work, replace it with a static string (e.g., "text": "Test alert") to verify the setup, then adjust the token.
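Independently of Splunk, you can verify the token and chat ID first with a direct call (a sketch; substitute real values for the placeholders):

```
curl -s -X POST "https://api.telegram.org/bot<your-bot-token>/sendMessage" \
  -H "Content-Type: application/json" \
  -d '{"chat_id": "-123456789", "text": "Test message from curl"}'
```

A successful call returns a JSON body with "ok": true; an invalid token returns HTTP 401. This separates Telegram-side problems from Splunk-side ones before you wire up the alert.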
Example configuration:

Webhook URL:
```
https://api.telegram.org/bot123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11/sendMessage
```

Payload:
```
{
  "chat_id": "-123456789",
  "text": "Splunk Alert: $result.host$ triggered an event at $trigger_time$"
}
```
@kiran_panchavat Thank you, this helps, will look up how to send a POST request to Telegram
@Raees
Configure the webhook with the following details:

URL: https://api.telegram.org/bot<YourBotToken>/sendMessage
HTTP Method: POST
Request Payload:
```
{
  "chat_id": "<YourChatID>",
  "text": "Alert: $result.message$"
}
```

Trigger Conditions: set the conditions under which the alert should trigger.
Test the Webhook: save the alert and test it to ensure that messages are being sent to your Telegram chat.

Here's an example of how the webhook URL and payload might look:

```
{
  "url": "https://api.telegram.org/bot123456789:ABCdefGHIjklMNOpqrSTUvwXYZ/sendMessage",
  "method": "POST",
  "payload": {
    "chat_id": "-987654321",
    "text": "Alert: $result.message$"
  }
}
```

Make sure to replace <YourBotToken> and <YourChatID> with your actual bot token and chat ID.