All Posts


Hello there, we're in the process of deploying Splunk Cloud. We have installed the Microsoft Office 365 App for Splunk along with all the required add-ons. The app is working as intended, except that we're not getting any Message Trace data. We followed the instructions to properly set up the add-on input and assigned the API permissions on the Azure side, but for whatever reason we're still not getting any trace data. It looks like the problem is on the Azure side; we have assigned the appropriate API permissions as stated in the documentation. Is there anything else that needs to be set up on the Azure or Splunk side to get Exchange trace data? We followed these instructions for the Splunk Add-on for Microsoft Office 365 integration: https://docs.splunk.com/Documentation/AddOns/released/MSO365/ConfigureappinAzureAD Any help would be highly appreciated.
I created an application in Entra ID (single tenant) and then created a secret; screenshots attached. I also gave the application the Azure Event Hubs Data Receiver role on the subscription. The authentication fails.
Hello @sushraw Can you please try appending the following:

| makemv CmdArgAV
| eval CmdArgAV = replace(CmdArgAV, "\n", ", ")

The final result, based on the sample event you shared, would be:

| makeresults
| eval _raw="Mar 26 15:37:59 <device_IP> <device_name>_Passed_Authentications 0045846127 2 0 2024-03-26 14:37:59.011 +00:00 06024423114 5202 NOTICE Device-Administration: Command Authorization succeeded, ConfigVersionId=1398, Device IP Address=<device_IP>, DestinationIPAddress=<device_IP>, DestinationPort=49, UserName=<user>, CmdSet=[ CmdAV=show CmdArgAV=running-config CmdArgAV=interface CmdArgAV=Ethernet1/19 CmdArgAV=<cr> ], Protocol=Tacacs, MatchedCommandSet=Unsafecommand, RequestLatency=10, NetworkDeviceName=<device_name>"
| rex field=_raw "CmdSet=\[(?<CmdSet>[^\]]+)\]"
| rex field=CmdSet max_match=0 "CmdArgAV=(?<CmdArgAV>[^\s]+)"
| makemv CmdArgAV
| eval CmdArgAV = replace(CmdArgAV, "\n", ", ")

If this reply helps you, Karma would be appreciated.
Step 1: Prerequisites
a. Splunk® Universal Forwarder with Splunk_TA_nix installed.
b. The "package.sh" scripted input should be enabled, similar to the example at the end of this post.
Note: the UF needs to be restarted to enable the input if it was previously started without the input.

Step 2: Deploy the updated inputs / app
If you need to deploy the app out, you only need to deploy it to Linux hosts. Do make sure you enable a splunkd restart on your app deployment.

Step 3: Detect the CVE
Now allow time for the data to arrive at your indexing tier, and you should be able to run this search as a detection:

source=package sourcetype=package NAME=xz-libs VERSION IN ("5.6.0","5.6.1")

Note: You may need to add index=os or index=Your_Linux_TA_Data_Index_here, but by default the data will be in index=main.

You'll probably want to take the search a few steps further. The first thing that comes to mind is adding "| stats latest(_time) as latest_time by host". When you manipulate _time like that, you'll notice it converts to epoch, so you'll probably want to convert it back to a human-readable format with "| convert ctime(latest_time)". The full search might look something like this:

source=package sourcetype=package NAME=xz-libs VERSION IN ("5.6.0","5.6.1")
| stats latest(_time) as latest_time by host
| convert ctime(latest_time)

If anyone else has anything to add, please reply or add your answer.
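For reference, here is a minimal sketch of the inputs.conf stanza that enables package.sh in Splunk_TA_nix. The interval and index values are assumptions on my part, so adjust them for your environment:

[script://./bin/package.sh]
disabled = 0
interval = 3600
sourcetype = package
source = package
index = os

After deploying the change, restart the UF so the scripted input starts collecting.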
How to detect CVE-2024-3094 with Splunk?
@ITWhisperer  Thanks a lot!!
I can confirm that the problem is fixed in version 9.2.1! I upgraded, and those indexers now work perfectly without additional configuration. Thanks!
Thanks. It is always best to give an accurate representation of your data as it saves everyone's time. Try like this:

| makeresults
| eval _raw="2024-03-29 12:25:15,276 _engine - INFO - process - request_id=testabc-012 - user-id=test01 Target language count 1
2024-03-29 12:25:15,276 _engine - INFO - process - request_id=testabc-123 - user-id=test01 Target language count 1
29/03/2024 17:55:14.991 _engine - INFO - process - request_id=testabc-123 - user-id=test01 API call is True for MyEngine
29/03/2024 17:55:14.991 _engine - INFO - process - request_id=testabc-234 - user-id=test01 API call is True for MyEngine
{\"timestamp\":\"2024-03-30T11:28:58.438Z\",\"logger_name\":\"MessageHandler\",\"thread_name\":\"threadPoolTaskExecutor-22\",\"level\":\"INFO\",\"serviceArchPath\":\"worker\",\"process\":\"NA\",\"message\":\"Marked request as succeed. {\\\"status\\\":\\\"PREDICT_SUCCESS\\\",\\\"message\\\":null}, critical Path: {\\\"requestStartTime\\\":1711797346337,\\\"operationsInCriticalPath\\\":{\\\"PREDICT:Feature:01\\\":{\\\"queue_overhead_ms\\\":59,\\\"sdk_overhead_ms\\\":25,\\\"process_time_ms\\\":432265,\\\"total_time_ms\\\":432349},\\\"PREDICT:Feature:02\\\":{\\\"queue_overhead_ms\\\":68,\\\"sdk_overhead_ms\\\":17,\\\"process_time_ms\\\":358611,\\\"total_time_ms\\\":358697},\\\"PLAT_CORE:Orchestrator\\\":{\\\"queue_overhead_ms\\\":142,\\\"process_time_ms\\\":158,\\\"total_time_ms\\\":300,\\\"hbase_overhead_in_ms\\\":136},\\\"PLAT_CORE:inference-core\\\":{\\\"total_time_ms\\\":8,\\\"hbase_overhead_in_ms\\\":5},\\\"PREDICT:Feature:03\\\":{\\\"queue_overhead_ms\\\":78,\\\"sdk_overhead_ms\\\":5,\\\"process_time_ms\\\":663,\\\"total_time_ms\\\":747}},\\\"currentOperationTimingInfo\\\":{},\\\"total_time_ms\\\":792101}\",\"x-request-id\":\"testabc-123\",\"x-service-id\":\"testID\",\"x-api-key\":\"test-client\",\"x-client-id\":\"test-client\",\"invocation_id\":\"test1\",\"x-user-id\":\"test@abc.com\",\"x-access-protected-e\":\"true\",\"trace_id\":\"testabc-0000\",\"trace_flags\":\"00\",\"span_id\":\"b1\"}"
| multikv noheader=t
| table _raw
| rex "Marked request as (?<finalStatus>\w+).+\"x-request-id\":\"(?<reqID>[^\"]+)\""
| rex field=_raw "request_id=(?<reqID>.+?) - .+(Target language count|API call is True for MyEngine)"
| rex field=_raw "Target language count (?<num_target>\d+)"
| rex field=_raw "API call is (?<callTrue>True) for MyEngine"
| stats first(num_target) as num_target first(callTrue) as callTrue first(finalStatus) as finalStatus by reqID
| where callTrue=="True" AND isnotnull(num_target)
Hi all, I'm trying to understand how rotating the certificates used for SSO works in a search head cluster. We have a search head cluster where SSO is already working. For the initial setup, I understand we can download the SPmetadata.xml file from the Splunk SAML settings page. However, during rotation, how do we create this file, given that we are using a cert that already exists and we want to rotate the server-side certificate? If we just download SPmetadata.xml to create the request for the IdP, it will contain the same cert we are already using. If we rotate the cert on our side first, so that we can download SPmetadata.xml to create the request for the IdP, this will end up in an error, because obviously the IdP won't recognize the server-side certificate in the meantime.
Thanks! You are correct, and on dummy data it is working perfectly fine. Let me share the exact event structure with dummy data from the platform index (platform-va6) holding the final status; snap attached for reference. Upon clicking "show as _raw text", the _raw text is:

{"timestamp":"2024-03-30T11:28:58.438Z","logger_name":"MessageHandler","thread_name":"threadPoolTaskExecutor-22","level":"INFO","serviceArchPath":"worker","process":"NA","message":"Marked request as succeed. {\"status\":\"PREDICT_SUCCESS\",\"message\":null}, critical Path: {\"requestStartTime\":1711797346337,\"operationsInCriticalPath\":{\"PREDICT:Feature:01\":{\"queue_overhead_ms\":59,\"sdk_overhead_ms\":25,\"process_time_ms\":432265,\"total_time_ms\":432349},\"PREDICT:Feature:02\":{\"queue_overhead_ms\":68,\"sdk_overhead_ms\":17,\"process_time_ms\":358611,\"total_time_ms\":358697},\"PLAT_CORE:Orchestrator\":{\"queue_overhead_ms\":142,\"process_time_ms\":158,\"total_time_ms\":300,\"hbase_overhead_in_ms\":136},\"PLAT_CORE:inference-core\":{\"total_time_ms\":8,\"hbase_overhead_in_ms\":5},\"PREDICT:Feature:03\":{\"queue_overhead_ms\":78,\"sdk_overhead_ms\":5,\"process_time_ms\":663,\"total_time_ms\":747}},\"currentOperationTimingInfo\":{},\"total_time_ms\":792101}","x-request-id":"testabc-123","x-service-id":"testID","x-api-key":"test-client","x-client-id":"test-client","invocation_id":"test1","x-user-id":"test@abc.com","x-access-protected-e":"true","trace_id":"testabc-0000","trace_flags":"00","span_id":"b1"}
@PickleRick @marnall After further investigation I found that TCP port 8088 was being used by another app. I removed the config from there and now everything is working fine. Thanks to both of you for your support and suggestions.
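For anyone hitting the same conflict, a quick way to see which app defines a given port is btool; a minimal sketch (8088 matches this thread, substitute your own port):

$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep 8088

The --debug flag prefixes each line with the file it came from, so a duplicate port definition shows up immediately along with the app that contains it.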
Here is a runanywhere example, using some mock-up data similar to what you posted. It shows the search working. The issue may be that my dummy data is not quite representative of your data. Please examine it to see if any corrections can be made to the dummy data; e.g. I made the events single-line, whereas your example seemed to show them as multi-line, but I wasn't sure if that was important.

| makeresults
| eval _raw="2024-03-29 12:25:15,276 _engine - INFO - process - request_id=testabc-012 - user-id=test01 Target language count 1
2024-03-29 12:25:15,276 _engine - INFO - process - request_id=testabc-123 - user-id=test01 Target language count 1
29/03/2024 17:55:14.991 _engine - INFO - process - request_id=testabc-123 - user-id=test01 API call is True for MyEngine
29/03/2024 17:55:14.991 _engine - INFO - process - request_id=testabc-234 - user-id=test01 API call is True for MyEngine
29/03/2024 18:01:20.556 message: Marked request as succeed. {\"status\":\"PREDICT_SUCCESS\"} x-request-id: testabc-123"
| multikv noheader=t
| table _raw
``` The lines above set up some dummy data ```
| rex "Marked request as (?<finalStatus>\w+).+ x-request-id: (?<reqID>.+)"
| rex field=_raw "request_id=(?<reqID>.+?) - .+(Target language count|API call is True for MyEngine)"
| rex field=_raw "Target language count (?<num_target>\d+)"
| rex field=_raw "API call is (?<callTrue>True) for MyEngine"
| stats first(num_target) as num_target first(callTrue) as callTrue first(finalStatus) as finalStatus by reqID
| where callTrue=="True" AND isnotnull(num_target)
Another workaround is to collect the lookup data to an index before overwriting it with another "release". Then you can run a normal search against your indexed data.
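A minimal sketch of that idea, assuming a lookup named yourlookup and a writable summary index (both names are placeholders):

| inputlookup yourlookup
| eval _time=now()
| collect index=summary

Each scheduled run appends a timestamped snapshot of the lookup rows to the index, so earlier "releases" stay searchable after the lookup file itself is overwritten.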
Assuming you have a lookup containing three columns (index, host, sourceid), so that you can have multiple index/host pairs matching a single sourceid, and you want to find situations where none of the index/host pairs for a given sourceid report indexed events, you can do it like this. Count the events you have (preferably with tstats if you can):

| tstats count where <your conditions> by index host

Now append your lookup table:

| inputlookup append=t yourlookup
| fillnull count value=0

Then check the overall count:

| stats sum(count) as count by index host sourceid

This is not much different from your "single source check", but since you want it checked against a "multisourced" id, do:

| eventstats sum(count) as combined_count by sourceid

This gives you an additional field containing the combined count of events across all index/host pairs for a given sourceid. The ones you're interested in are those which didn't have any events in any of those index/host pairs:

| where combined_count=0

A stitched-together sketch follows this post.
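Stitching those steps together, a sketch might look like the following. Note the extra | lookup step, which is an assumption on my part: the tstats rows carry only index and host, so sourceid has to be attached from the lookup before the stats by sourceid can group them. yourlookup and the tstats conditions are placeholders:

| tstats count where <your conditions> by index host
| lookup yourlookup index host OUTPUT sourceid
| inputlookup append=t yourlookup
| fillnull count value=0
| stats sum(count) as count by index host sourceid
| eventstats sum(count) as combined_count by sourceid
| where combined_count=0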
The high-level idea is sound: use the common contact point to get data from some isolated network segment. The issue here is the low-level design. We don't know what solution you're talking about, how the data is being processed there, or how/where the logs are stored and forwarded. There are different possible approaches depending on how it all works (syslog? REST API? whatever?).
In a small environment (especially a lab one) you can sometimes combine several roles into one server, and a HF as such is nothing more than a Splunk Enterprise instance with forwarding enabled (actually, you could argue that any component that is not a UF and does not do local indexing is a HF). So this setup (a DS also doing HF work) should work. In this setup you should have (see the sketch after this post):
1) On your indexer(s) - inputs.conf creating an input for S2S from your HF (that's kinda obvious)
2) On your HF/DS - inputs.conf, outputs.conf (again - obvious stuff), serverclass.conf
3) On your UF/client HF - deploymentclient.conf pointing to your HF/DS instance
You also need to take into account that some things changed in 9.2, so if you upgraded to 9.2, see https://docs.splunk.com/Documentation/Splunk/9.2.0/Updating/Upgradepre-9.2deploymentservers
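A minimal sketch of those three pieces, with placeholder hostnames; 9997 and 8089 are the conventional S2S and management ports, but treat every value here as an assumption to adapt.

On the indexer(s), inputs.conf:

[splunktcp://9997]
disabled = 0

On the HF/DS, outputs.conf:

[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = idx1.example.com:9997

On the UF/client HF, deploymentclient.conf:

[deployment-client]

[target-broker:deploymentServer]
targetUri = ds.example.com:8089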
Yep. While the local processing on the UF part is supposed to work _somehow_, it's indeed not very well documented and not recommended. If you want only some sources masked, you could use source-based or host-based stanzas in props.conf on your indexer(s) to selectively apply your SEDCMD or transform only to a specific part of your data.
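As an illustration, a props.conf sketch on the indexer that applies masking only to one source; the path and the SSN-style pattern are hypothetical, substitute your own:

[source::/var/log/myapp/*.log]
SEDCMD-mask_ids = s/\d{3}-\d{2}-\d{4}/XXX-XX-XXXX/g

A [host::...] stanza works the same way if you'd rather scope by host.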
The usual debugging steps apply:
1) Check if the receiving side is listening on the port (use netstat to list open ports and verify that 8088 is among them).
2) Check the network connectivity from the client.
3) Verify firewall rules.
4) If needed, run tcpdump/wireshark on the server and see if any traffic from the client is reaching the server at all.
Once you can connect to your HEC service port, you can start debugging the token settings.
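When you get that far, a quick end-to-end test of both connectivity and the token is a manual HEC request; a sketch assuming HTTPS on the default port 8088 and a placeholder token:

curl -k https://your-server:8088/services/collector/event -H "Authorization: Splunk <your-hec-token>" -d '{"event": "hello from curl"}'

A {"text":"Success","code":0} response means the token works; an HTTP 403 / "Invalid token" reply means the network path is fine and the problem is in the token configuration.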
Could you outline the exact steps you took? There are a lot of IDs (Tenant ID, Client ID, Secret ID, etc.), so make sure you enter the correct ones in the app configuration: the Tenant ID, the Application/Client ID, and the Secret value itself.
Hi @AlirezaGhanavat, first of all: which user are you using to install the UF? Does it have the permissions to install an app? Do you have an antivirus? Anyway, in these cases I always open a case with Splunk Support. Ciao. Giuseppe