All Posts


I see that you are running Splunk on Windows? I don't have much experience with how Windows internals work in current versions, but are you sure that Splunk can use all that added memory without additional configuration? E.g. on Linux you must at least disable boot-start and re-enable it again; otherwise systemd doesn't know that Splunk is allowed to use that additional memory.
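For the Linux case, this is roughly what that boot-start refresh looks like; a minimal sketch that assumes a default /opt/splunk install, systemd-managed mode, and a "splunk" service user, so adjust paths and user names to your environment:

systemctl stop Splunkd
/opt/splunk/bin/splunk disable boot-start
/opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user splunk
systemctl daemon-reload
systemctl start Splunkd

Re-registering the unit lets systemd pick up the current resource limits for the Splunkd service.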
Hi. I think this is just a shortcut to test your written search, and it works just like copy & pasting your query into another window. If you want your "tokens" etc. to be filled in in that query, then you must run it from the dashboard, not from this edit mode. I don't have DS at hand to test it now, but you could easily check it yourself. r. Ismo
It works if I give the keyword as *init:data:invoke*: it runs in the dashboard and also in the search of the Edit search part. But if I run *name:init:data:invoke*, it runs in the dashboard but not in that particular Edit search. And if it is *$entityToken:init:data:invoke*, it runs neither in the dashboard nor in that particular Edit search.
You can add base forwarding (forward all events) to a target host with the GUI. But if/when you need to send only some events to that target and others to some other target, then you must do it with conf files (see the sketch below). Anyhow, I strongly recommend you do this kind of base configuration with apps! That way it is much easier to administer, especially in larger environments. Also your auditors etc. are happier when you are fulfilling their requirements.
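A rough sketch of selective routing with conf files. The group names, host names, and source path are made-up examples; the routing uses the standard _TCP_ROUTING key, and note that props/transforms-based routing happens at parse time, so this belongs on a heavy forwarder or indexer:

# outputs.conf - two target groups (example hosts)
[tcpout]
defaultGroup = group_main

[tcpout:group_main]
server = indexer-main.example.com:9997

[tcpout:group_special]
server = indexer-special.example.com:9997

# props.conf - pick which events get rerouted (example source)
[source::/var/log/special/app.log]
TRANSFORMS-route_special = send_to_special

# transforms.conf
[send_to_special]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = group_special

Packaging these three files into a small app makes the routing easy to deploy and review.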
Sorry. Still not working
I cut and pasted your script and ran the dashboard. Then clicked the link. Sorry still not working

<dashboard version="1.1" theme="light">
  <label>test1</label>
  <row>
    <panel>
      <single>
        <search>
          <query>| makeresults | eval URL="http://docs.splunk.com/Documentation" | table URL</query>
        </search>
        <option name="colorMode">block</option>
        <option name="drilldown">all</option>
        <option name="useColors">1</option>
        <drilldown>
          <set token="tokURL">$click.value$</set>
        </drilldown>
      </single>
    </panel>
  </row>
  <row>
    <panel depends="$tokURL$">
      <html>
        <p>$tokURL$</p>
        <iframe src="$tokURL$" height="600" width="100%" style="border:2;">
        </iframe>
      </html>
    </panel>
  </row>
</dashboard>
I am getting "Refused to display 'https://docs.splunk.com/' in a frame because it set 'X-Frame-Options' to 'sameorigin'." even though I set up the configurations below in web.conf in /opt/splunk/etc/system/local:

dashboard_html_allow_embeddable_content = true
dashboard_html_allow_iframes = true
dashboard_html_allow_inline_styles = true
dashboard_html_allowed_domains = *.splunk.com
dashboard_html_wrap_embed = false

This is the script in the dashboard:

<row>
  <panel>
    <html>
      <iframe src="https://docs.splunk.com/Documentation" width="100%" height="300">&gt;</iframe>
    </html>
  </panel>
</row>

Please advise. Thanks
Ok. As you created an input in that app, it should have created an instance of inputs.conf in the "local" subdirectory: /opt/splunk/etc/TA-AKAMAI_SIEM/local/inputs.conf. See that file; it should contain a stanza with your particular input instance definition. There you can add your index=something setting (see the example below).
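For illustration, assuming the input stanza is named [TA-AKAMAI_SIEM] as in the app's default/inputs.conf, the local override could look like this; the index name is just a placeholder and must already exist on your indexers:

# /opt/splunk/etc/TA-AKAMAI_SIEM/local/inputs.conf (path as given above)
[TA-AKAMAI_SIEM]
index = akamai_siem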
@securepoint

API-to-HEC Approach

Using the Cortex XDR APIs with Splunk's HEC is a viable path. Here's how you could approach it:

API Access:
You'll need an API key and key ID from Cortex XDR (check the "Getting Started with Cortex XDR APIs" guide). Ensure you have the right permissions.

Relevant Endpoints:
/public_api/v1/endpoints/get_endpoints: Lists all endpoints with basic metadata (e.g., hostname, IP, OS).
/public_api/v1/endpoints/get_endpoint: Detailed data for a specific endpoint (e.g., status, last seen).
/public_api/v1/alerts/get_alerts_multi_events: Alert details, but you want more than this.
/public_api/v1/incidents/get_incidents and /public_api/v1/incidents/get_incident_extra_data: Incident data with some context.

https://docs-cortex.paloaltonetworks.com/r/Cortex-XDR-REST-API/Get-Endpoint
https://docs-cortex.paloaltonetworks.com/r/Cortex-XDR-REST-API/Get-all-Endpoints
https://docs-cortex.paloaltonetworks.com/r/Cortex-XDR-REST-API

Raw Data: There's no direct "get all endpoint telemetry" endpoint. You'd need to use XQL (XDR Query Language) via the /public_api/v1/xql/start_xql_query endpoint to query raw telemetry (e.g., process, network, file events).

Splunk HEC Setup
Configure an HEC token in Splunk (Settings > Data Inputs > HTTP Event Collector). Ensure the endpoint is reachable (e.g., https://<splunk_host>:8088/services/collector). Data sent to HEC should be JSON-formatted, with fields like event, time, host, and sourcetype.

Scripting the Solution
You'll need a script (e.g., in Python) to:
Authenticate with the Cortex XDR API.
Query endpoint data and/or XQL for raw telemetry.
Format the results as JSON.
Send it to Splunk HEC.

Here's a basic example script to get you started:

import requests
import json
import time

# Cortex XDR API credentials
api_key = "your_api_key"
api_key_id = "your_api_key_id"
fqdn = "your-tenant.xdr.us.paloaltonetworks.com"  # Replace with your tenant FQDN
headers = {
    "x-xdr-auth-id": api_key_id,
    "Authorization": api_key,
    "Content-Type": "application/json"
}

# Splunk HEC settings
hec_url = "https://your-splunk-host:8088/services/collector"
hec_token = "your_hec_token"
hec_headers = {"Authorization": f"Splunk {hec_token}"}

# Function to query Cortex XDR endpoints
def get_all_endpoints():
    url = f"https://api-{fqdn}/public_api/v1/endpoints/get_endpoints"
    response = requests.post(url, headers=headers, json={"request_data": {}})
    if response.status_code == 200:
        return response.json().get("reply", {}).get("endpoints", [])
    else:
        print(f"Error: {response.status_code} - {response.text}")
        return []

# Function to send data to Splunk HEC
def send_to_splunk(data):
    payload = {
        "event": data,
        "time": int(time.time()),
        "sourcetype": "cortex_xdr_endpoint",
        "host": "cortex_xdr"
    }
    response = requests.post(hec_url, headers=hec_headers, json=payload)
    if response.status_code == 200:
        print("Data sent to Splunk successfully")
    else:
        print(f"HEC Error: {response.status_code} - {response.text}")

# Main logic
endpoints = get_all_endpoints()
for endpoint in endpoints:
    send_to_splunk(endpoint)
    time.sleep(1)  # Throttle to avoid rate limits

# Example XQL query for raw telemetry (adjust as needed)
xql_query = {
    "request_data": {
        "query": "dataset = xdr_data | filter event_type = PROCESS | limit 100",
        "timeframe": {"relative": {"unit": "hour", "value": -24}}
    }
}
xql_url = f"https://api-{fqdn}/public_api/v1/xql/start_xql_query"
xql_response = requests.post(xql_url, headers=headers, json=xql_query)
if xql_response.status_code == 200:
    query_id = xql_response.json().get("reply", {}).get("query_id")
    # Fetch results with /get_xql_query_results (implement polling logic)
    # Send results to Splunk

https://pan.dev/splunk/docs/getting-data-in/
https://live.paloaltonetworks.com/t5/cortex-xdr-discussions/cortex-xdr-and-splunk/td-p/476724
https://docs.paloaltonetworks.com/strata-logging-service/administration/forward-logs/forward-logs-to-an-https-server
https://community.splunk.com/t5/All-Apps-and-Add-ons/Splunk-and-Palo-Alto-Cortex-Data-Lake-Data-for-global-protect/m-p/493384
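To close the loop on the "implement polling logic" comment above, here is a rough sketch of that step. The /public_api/v1/xql/get_query_results endpoint name, its request body, and the reply fields (status, results.data) are assumptions based on the Cortex XDR REST API docs linked above, so verify them against your tenant's documentation before relying on this:

# Continuation of the script above; reuses fqdn, headers, query_id and send_to_splunk().
import time
import requests

def fetch_xql_results(query_id, max_wait_seconds=120):
    # Endpoint name and payload shape are assumptions - check the XQL API docs.
    url = f"https://api-{fqdn}/public_api/v1/xql/get_query_results"
    body = {"request_data": {"query_id": query_id, "limit": 100, "format": "json"}}
    deadline = time.time() + max_wait_seconds
    while time.time() < deadline:
        response = requests.post(url, headers=headers, json=body)
        if response.status_code != 200:
            print(f"XQL Error: {response.status_code} - {response.text}")
            return []
        reply = response.json().get("reply", {})
        # PENDING means the query is still running; poll again after a short pause.
        if reply.get("status") == "PENDING":
            time.sleep(5)
            continue
        return reply.get("results", {}).get("data", [])
    print("Timed out waiting for XQL results")
    return []

# Send each raw telemetry row to HEC, same as the endpoint data above.
for row in fetch_xql_results(query_id):
    send_to_splunk(row)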
Thanks, @iamryan, for following up! I have found the solution to my problem. I just need to configure the health rule so that an alert is triggered if the metric value is less than 1 at any given time. Even if the task is not running, it will remain in the READY state in the Task Scheduler, with a value of 2, and for any other state the value will be 0.
Hi @nithys, let me understand: is the issue that, clicking on Run search, your search doesn't start, or that it doesn't run (no results)? If it's the first case, please try to copy the search into a search dashboard and check if it runs. If the issue is that your search doesn't run, debug it in a search dashboard. What happens if you run your search without the where condition? Ciao. Giuseppe
@uagraw01
Even with 64GB, an excessively large or poorly managed KV Store dataset could overwhelm mongod.

Check the KV Store data size:
du -sh /opt/splunk/var/lib/splunk/kvstore/mongo/

Look in collections.conf across apps ($SPLUNK_HOME/etc/apps/*/local/) to identify what's stored.

Count the records in a collection via the Splunk REST search command:
| rest /servicesNS/-/-/storage/collections/data/<collection_name> | stats count
https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/kvstore/usetherestapitomanagekv/

Is the KV Store directory getting too large (e.g., 20GB+)? Any single collection with millions of records or huge documents?

If a collection is oversized, archive or purge old data (e.g., ./splunk clean kvstore --collection <name> after backing up). Optimize apps to store less in KV Store (e.g., reduce field counts or batch updates).
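If you'd rather see per-collection sizes in one pass, here is a hedged SPL sketch against the KV Store introspection endpoint; the endpoint path and the field names inside the JSON (ns, count, size) are assumptions, so adjust if your version exposes them differently:

| rest splunk_server=local /services/server/introspection/kvstore/collectionstats
| mvexpand data
| spath input=data
| eval size_mb=round(size/1024/1024,2)
| table ns count size_mb
| sort - size_mb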
@uagraw01
Is mongod using a small fraction of the 64GB (e.g., stuck at 4GB or 8GB) before crashing? Any ulimit restrictions?

If capped, increase the ulimit (e.g., edit /etc/security/limits.conf to set "splunk - memlock unlimited" and reboot or reapply). MongoDB (used by KV Store) typically uses up to 50% of system RAM minus 1GB for its working set by default. With 64GB, it should have ~31GB available; ensure it's not artificially limited.

Open $SPLUNK_HOME/var/log/splunk/mongod.log and look for the [ReplBatcher] out of memory error. Note the timestamp and surrounding lines. Cross-check $SPLUNK_HOME/var/log/splunk/splunkd.log for KV Store restart attempts or related errors. The [ReplBatcher] component handles replication in KV Store, and an "out of memory" error here suggests it's choking on the replication workload. With 64GB, it shouldn't be a hardware limit, so tune the configuration.

Check server.conf ($SPLUNK_HOME/etc/system/local/server.conf):

[kvstore]
oplogSize = <current value>

From the server.conf spec:

oplogSize = <integer>
* The size of the replication operation log, in megabytes, for environments
  with search head clustering or search head pooling. In a standalone
  environment, 20% of this size is used.
* After the KV Store has created the oplog for the first time, changing this
  setting does NOT affect the size of the oplog. A full backup and restart of
  the KV Store is required.
* Do not change this setting without first consulting with Splunk Support.
* Default: 1000 (1GB)

The default is 1000 MB (1GB). Post-RAM upgrade, this might be too small for your data throughput (example below).

https://docs.splunk.com/Documentation/Splunk/latest/Admin/Serverconf

Run ./splunk show kvstore-status to see replication lag or errors. Restart Splunk (./splunk restart) and monitor whether the crashes decrease. A larger oplog gives replication more buffer space, reducing memory pressure.
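For example (the value here is only an illustration; per the spec note above, back up the KV Store and check with Splunk Support before resizing the oplog):

# $SPLUNK_HOME/etc/system/local/server.conf
[kvstore]
oplogSize = 2000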
Ha @uagraw01 you caught me at a good time.

Sounds like RAM shouldn't really be an issue then, although it is possible to adjust how much memory mongo can use with server.conf/[kvstore]/percRAMForCache (see https://docs.splunk.com/Documentation/Splunk/latest/Admin/Serverconf). You could adjust this and see if it resolves the issue; it's 15% by default (see the sketch below).

The other thing I was wondering is whether there are any high-memory operations against KV Store being done when it crashes that might be causing more-than-usual memory usage. Are you using DB Connect on the server, or are any particular modular inputs executing at the time of the issue?

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Will
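For reference, the knob lives in server.conf on the affected instance; a minimal sketch, with the percentage as an example value only:

# $SPLUNK_HOME/etc/system/local/server.conf
[kvstore]
# Percentage of system RAM the KV Store cache may use (default is 15).
percRAMForCache = 25

A Splunk restart is needed for the change to take effect.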
@uagraw01
Upgrading from 32GB to 64GB RAM means memory is no longer the main issue. But since the [ReplBatcher] out of memory error is still happening, the problem is likely elsewhere.

Check mongod memory usage during a crash: on Linux, run top or htop and sort by memory (the RES column) to see how much mongod is consuming.

Confirm no OS-level limits are capping it: check ulimit -v (virtual memory) for the Splunk user. It should be unlimited or very high. A quick sketch of both checks follows below.
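A quick sketch of those checks, assuming the Splunk/mongod processes run as the "splunk" user and a modern procps top; adjust the user name to your setup:

# How much memory is mongod actually using? (sort by resident memory)
top -o %MEM -p "$(pgrep -d, mongod)"

# Any per-user limits capping it?
su - splunk -c 'ulimit -v'   # virtual memory, should be unlimited
su - splunk -c 'ulimit -l'   # max locked memory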
This is what is present in HF under /opt/splunk/etc/TA-AKAMAI_SIEM/default/inputs.conf:

[TA-AKAMAI_SIEM]
index = default
sourcetype = akamaisiem
interval = 60

We have props and transforms as well in this app by default.
The "monitor" stanza is only for reading local files. This TA's input will use something else. Unfortunately, it's apparently not very well written (java? duh...; also no CIM-compatibility at all) an... See more...
The "monitor" stanza is only for reading local files. This TA's input will use something else. Unfortunately, it's apparently not very well written (java? duh...; also no CIM-compatibility at all) and underdocummented so you have to either reach out to the author or download the app, unpack it and browse inside.
Where and how do I need to configure inputs.conf for my data inputs? I don't have any log path to give to a monitor stanza in inputs.conf. What exactly should I put in inputs.conf? https://splunkbase.splunk.com/app/4310 This is the app I am trying to install...
Hey Will, @livehybrid, you’re even faster than GPT! We've already upgraded our RAM from 32GB to 64GB.