All Posts

The question doesn't seem to provide all the necessary information (or at least I am unable to understand it). Could you please elaborate?
@splunklearner I have a standalone server, so you can try these settings on your heavy forwarder or indexers.
@splunklearner I tried this using your sample data; please have a look.

[syslogtest]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
CHARSET=UTF-8
category=Custom
pulldown_type=true
SEDCMD-removeheader=s/^[^\{]*//g
KV_MODE=json
AUTO_KV_JSON=true
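Note that LINE_BREAKER and SEDCMD are index-time settings, so this stanza belongs on the first full Splunk instance that parses the data (indexers or a heavy forwarder), while KV_MODE and AUTO_KV_JSON are search-time settings for the search head. To double-check which props actually apply on a given instance, btool is handy; run it on the indexer or heavy forwarder:

$SPLUNK_HOME/bin/splunk btool props list syslogtest --debug

The --debug flag shows which app each setting comes from, which helps spot a conflicting stanza.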
Hello all, We currently have the following event, which contains both JSON and non-JSON data. Please help me remove the non-JSON part, and tell me where I need to set INDEXED_EXTRACTIONS or KV_MODE to effectively auto-extract all the JSON fields.

Nov 9 17:34:28 128.160.82.28 [local0.warning] <132>1 2024-11-09T17:34:28.436542Z AviVantage v-epswafhic2-wdc.hc.cloud.uk.hc-443 NILVALUE NILVALUE - {"adf":true,"significant":0,"udf":false,"virtualservice":"virtualservice-4583863f-48a3-42b9-8115-252a7fb487f5","report_timestamp":"2024-11-09T17:34:28.436542Z","service_engine":"GB-DRN-AB-Tier2-se-vxeuz","vcpu_id":0,"log_id":10181,"client_ip":"128.12.73.92","client_src_port":44908,"client_dest_port":443,"client_rtt":1,"http_version":"1.1","method":"HEAD","uri_path":"/path/to/monitor/page/","host":"udg1704n01.hc.cloud.uk.hc","response_content_type":"text/html","request_length":93,"response_length":94,"response_code":400,"response_time_first_byte":1,"response_time_last_byte":1,"compression_percentage":0,"compression":"","client_insights":"","request_headers":3,"response_headers":12,"request_state":"AVI_HTTP_REQUEST_STATE_READ_CLIENT_REQ_HDR","significant_log":["ADF_HTTP_BAD_REQUEST_PLAIN_HTTP_REQUEST_SENT_ON_HTTPS_PORT","ADF_RESPONSE_CODE_4XX"],"vs_ip":"128.160.71.14","request_id":"61e-RDl6-OZgZ","max_ingress_latency_fe":0,"avg_ingress_latency_fe":0,"conn_est_time_fe":1,"source_ip":"128.12.73.92","vs_name":"v-epswafhic2-wdc.hc.cloud.uk.hc-443","tenant_name":"admin"}

And where do I need to apply these configurations? We have syslog servers with a UF installed, and they send data to our deployment server. The DS pushes apps to the cluster master and the deployer, and the pushing out is done from there. As of now we have a props.conf on the master, which is pushed to the indexers.
Hi @erick4x4, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
@azer271 Check the internal logs: index=_internal *sentinelone*
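If that search returns events, narrowing to the add-on's own sourcetype and filtering for problems usually points to the cause. A sketch, assuming the add-on logs under the sourcetype mentioned elsewhere in this thread:

index=_internal sourcetype="sentinelone:modularinput" (ERROR OR WARN)

No events at all often means the input never started, so also check that the input is enabled.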
@azer271 To verify, you can test the API connection using Postman or curl:

curl -X GET "https://xxx-xxx-xxx.sentinelone.net/web/api/v2.1/info" -H "Authorization: ApiToken <YOUR_API_TOKEN>"

If you get a successful response, the API token is valid. If logs are missing, check the API permissions and any firewall restrictions.
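If you just want the HTTP status code from that check (for example to script it), curl's write-out option works; this is the same call as above, only the output handling differs:

curl -s -o /dev/null -w "%{http_code}\n" -H "Authorization: ApiToken <YOUR_API_TOKEN>" "https://xxx-xxx-xxx.sentinelone.net/web/api/v2.1/info"

A 200 suggests the token is fine, a 401/403 points at the token or its scope, and a timeout points at network/firewall restrictions.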
Have you tried removing the special characters (- and .) from the names?
Is there any solution to this? We are encountering a similar issue. Everything looks fine and we are now using the latest agent, but it is not reporting and not generating logs either.
Here's a straightforward hack that uses a zero width space as a padded value prefix to determine a cell's status. For example, a status of Unknown is one zero width space. The SPL uses the urldecode() eval function to convert URL-encoded UTF-8 characters to strings.

<table id="table2">
  <search>
    <query>| makeresults format=csv data="
_time,HOSTNAME,PROJECTNAME,JOBNAME,INVOCATIONID,RUNSTARTTIMESTAMP,RUNENDTIMESTAMP,RUNMAJORSTATUS,RUNMINORSTATUS,RUNTYPENAME
2025-01-20 04:38:04.142,AEW1052ETLLD2,AQUAVISTA_UAT,Jx_104_SALES_ORDER_HEADER_FILE,HES,2025-01-19 20:18:25.0,,STA,RUN,Run
2025-01-19 04:38:04.142,AEW1052ETLLD2,AQUAVISTA_UAT,Jx_104_SALES_ORDER_HEADER_FILE,HES,2025-01-18 20:18:25.0,2025-01-18 20:18:29.0,FIN,FWF,Run
2025-01-18 04:38:04.142,AEW1052ETLLD2,AQUAVISTA_UAT,Jx_104_SALES_ORDER_HEADER_FILE,HES,2025-01-17 20:18:25.0,2025-01-17 20:18:29.0,FIN,FOK,Run
2025-01-17 04:38:04.142,AEW1052ETLLD2,AQUAVISTA_UAT,Jx_104_SALES_ORDER_HEADER_FILE,HES,2025-01-16 20:18:25.0,2025-01-16 20:18:29.0,FIN,FWW,Run
2025-01-16 04:38:04.142,AEW1052ETLLD2,AQUAVISTA_UAT,Jx_104_SALES_ORDER_HEADER_FILE,HES,2025-01-15 20:18:25.0,2025-01-15 20:18:29.0,FIN,HUH,Run
"
``` use zero width space as pad ```
| eval status_unknown=urldecode("%E2%80%8B")
| eval status_success=urldecode("%E2%80%8B%E2%80%8B")
| eval status_failure=urldecode("%E2%80%8B%E2%80%8B%E2%80%8B")
| eval status_warning=urldecode("%E2%80%8B%E2%80%8B%E2%80%8B%E2%80%8B")
| eval status_running=urldecode("%E2%80%8B%E2%80%8B%E2%80%8B%E2%80%8B%E2%80%8B")
| eval _time=strptime(_time, "%Y-%m-%d %H:%M:%S.%Q")
| search PROJECTNAME="*" INVOCATIONID="*" RUNMAJORSTATUS="*" RUNMINORSTATUS="*"
| eval status=case(RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FWW", status_warning, RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FOK", status_success, RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FWF", status_failure, RUNMAJORSTATUS="STA" AND RUNMINORSTATUS="RUN", status_running, 1=1, status_unknown)
| eval tmp=JOBNAME."|".INVOCATIONID
| eval date=strftime(strptime(RUNSTARTTIMESTAMP, "%Y-%m-%d %H:%M:%S.%Q"), "%Y-%m-%d")
| eval value=status.if(status==status_unknown, "Unknown", "start time: ".coalesce(strftime(strptime(RUNSTARTTIMESTAMP, "%Y-%m-%d %H:%M:%S.%Q"), "%H:%M"), "").urldecode("%0a").if(status==status_running, "Running", "end time: ".coalesce(strftime(strptime(RUNENDTIMESTAMP, "%Y-%m-%d %H:%M:%S.%Q"), "%H:%M"), "")))
| xyseries tmp date value
| eval tmp=split(tmp, "|"), Job=mvindex(tmp, 0), Country=mvindex(tmp, 1)
| fields - tmp
| table Job Country *</query>
  </search>
  <option name="drilldown">none</option>
  <option name="wrap">true</option>
  <format type="color">
    <colorPalette type="expression">case(match(value, "^\\u200b{1}[^\\u200b]"), "#D3D3D3", match(value, "^\\u200b{2}[^\\u200b]"), "#90EE90", match(value, "^\\u200b{3}[^\\u200b]"), "#F0807F", match(value, "^\\u200b{4}[^\\u200b]"), "#FEEB3C", match(value, "^\\u200b{5}[^\\u200b]"), "#ADD9E6")</colorPalette>
  </format>
</table>
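To see why the match() patterns in the colorPalette work, here is a minimal standalone search (separate from the dashboard above) that builds one of the pads and shows it has a real character length even though it renders as empty:

| makeresults
| eval pad=urldecode("%E2%80%8B%E2%80%8B"), pad_length=len(pad)

pad displays as blank, but pad_length is 2, which is exactly what the ^\u200b{2} pattern in the color expression keys on.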
Hi. I am new to Splunk and SentinelOne. Here is what I've done so far:

I need to forward logs from SentinelOne to a single Splunk instance. Since it is a single instance, I installed the Splunk CIM Add-on and the SentinelOne App (as mentioned in the app's installation instructions: https://splunkbase.splunk.com/app/5433 ).

In the SentinelOne App on the Splunk instance, I changed the search index to sentinelone in Application Configuration. I had already created the index for testing purposes. In the API configuration, I added the URL, which is xxx-xxx-xxx.sentinelone.net, and the API token. The token was generated by adding a new service user in SentinelOne and clicking generate API token. The scope is global. I am not sure if it is the correct API token.

Moreover, I am not sure which channels I need to pick in the SentinelOne inputs in Application Configuration (SentinelOne App), such as Agents/Activities/Applications etc. How do I know which channels I need to forward, or should I just add all channels?

Clicking the application health overview, there is no data ingested. Using this SPL: index=_internal sourcetype="sentinelone*" sourcetype="sentinelone:modularinput" does not show any action=saving_checkpoint, which means no data.

Any help/documentation for the setup would be appreciated. I would like to know the reason for the missing data and how to fix it. Thank you.
Gcusello, This is exactly what's going on. That log file is updated frequently, but by a script which 99% of the time writes identical output (when it doesn't detect any problems). That means Windows shows the file with a new update timestamp, but the file hash doesn't actually change. I'll edit my script to put a dynamic timestamp in the file, or something similar, so the content changes and the Splunk Forwarder sends the updates. Thank you so much!
Since you apparently did a local connectivity test and it succeeded, there must be something external to Splunk itself preventing you from connecting. Your iptables rules seem to not be interfering (you don't have port 8000 explicitly open but the general policy is ACCEPT). So it points to something network-related. Routing? Filtering on some intermediate device? It's something best solved with your local admin staff since it doesn't seem to be related to Splunk as such.
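If you want to narrow down where the connection dies, a couple of quick checks from an affected client machine can help (replace <splunk-host> with your server's address; these are generic tools, not Splunk-specific):

nc -vz <splunk-host> 8000
curl -v http://<splunk-host>:8000/

If the TCP connection itself fails, the problem is in the network path; if TCP connects but the HTTP request hangs, look at proxies or the splunkweb service itself.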
When using this package in Jupyter Notebook, I'm using Python to apply different models to the data based on whether it's during working hours or not. Although I'm using an autoencoder as the main architectural framework, I'm taking this approach because the data follows different distributions under these two scenarios. Are there any better approaches?
It looks like port 8000 is already open on the host firewall (I believe "irdmi" is the service name for port 8000 on RHEL), so it sounds like the host itself should be allowing connectivity. Nevertheless, you could try explicitly allowing port 8000 and checking the logs.

Open port 8000 in the firewall:

sudo firewall-cmd --zone=public --add-port=8000/tcp --permanent
sudo firewall-cmd --reload

Verify with:

sudo firewall-cmd --list-all

Check the Splunk logs for any errors:

$SPLUNK_HOME/var/log/splunk/web_service.log
$SPLUNK_HOME/var/log/splunk/splunkd.log

Have you been able to confirm that no network changes were made around that time?
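It may also be worth confirming that Splunk Web is actually listening on the port before chasing the network:

sudo ss -tlnp | grep 8000

If nothing is listening on 8000, the issue is in Splunk's web.conf or the splunkweb process rather than in the firewall.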
Hi @wdhaar Once you have downloaded the Cisco Security Cloud Splunk app (cisco-security-cloud_301.tgz), you need to install the app onto your existing Splunk instance. The method for doing this depends on your setup: Single server instance: https://docs.splunk.com/Documentation/AddOns/released/Overview/Singleserverinstall Distributed environment: https://docs.splunk.com/Documentation/AddOns/released/Overview/Distributedinstall If you are using Splunk Cloud then you do not actually need to download from Splunkbase - instead you can install it via the App Browser in Splunk Cloud. Please let me know how you get on and consider upvoting/karma this answer if it has helped. Regards Will
Here’s a sample curl request to create a muting rule in the Splunk Observability Suite using the provided API reference:

curl -X POST "https://api.us0.signalfx.com/v2/incidents/muting-rules" \
  -H "Content-Type: application/json" \
  -H "X-SF-TOKEN: YOUR_ACCESS_TOKEN" \
  -d '{
    "filter": {
      "severity": "Warning",
      "incidentType": "SIGNAL",
      "tags": {
        "environment": ["prod"],
        "team": ["infra"]
      }
    },
    "reason": "Scheduled maintenance",
    "startTime": 1672531200000,
    "endTime": 1672617600000,
    "enabled": true
  }'

Explanation:
URL: the API endpoint to create muting rules.
Headers: Content-Type: application/json specifies a JSON payload; X-SF-TOKEN is your Splunk Observability API token.
Payload: filter defines which incidents to mute based on severity, type, and tags; reason is the explanation for the muting rule (e.g., scheduled maintenance); startTime and endTime are Unix epoch times (in milliseconds) specifying when the rule is active; enabled is a boolean that activates the muting rule immediately.

Replace YOUR_ACCESS_TOKEN and customize the payload as needed for your setup. Refer to the Splunk Observability API docs for further customization options. Please let me know how you get on and consider upvoting/karma this answer if it has helped. Regards Will
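Since startTime and endTime are epoch milliseconds, one quick way to generate them on Linux is GNU date (the timestamps here are only examples; substitute your own maintenance window):

date -d "2024-12-31 00:00:00 UTC" +%s%3N
date -d "2024-12-31 04:00:00 UTC" +%s%3N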
Hi @cdavidsonbp The content packs might be helpful if you're running ITSI/ITE Work, but you will still need to look at collecting the data. The Windows TA you referenced is a great starting point, as it can collect AD events and Windows event logs that should help create the info you need. Have a look at these docs on AD audit policy configuration; the docs are for the older Exchange app, but this functionality is now in the Add-on for Windows. https://docs.splunk.com/Documentation/MSExchange/4.0.4/DeployMSX/ConfigureActiveDirectoryauditpolicy Please let me know how you get on and consider upvoting/karma this answer if it has helped. Regards Will
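Once the TA is collecting, a quick way to confirm AD change events are arriving is a search along these lines; the index name is an assumption (adjust to wherever your Windows data lands), and EventCode 5136 is "a directory service object was modified":

index=wineventlog sourcetype=XmlWinEventLog EventCode=5136
| stats count BY ObjectDN

If the field names differ in your environment, check the add-on's extractions for the 5136 events.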