All Posts


That's odd @JoshuaJJ - Please can you post the logs that ExecProcessor spits out so I can try to replicate this? Thanks
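(For reference, a quick way to pull those messages is to search the _internal index for the ExecProcessor component; this is a minimal sketch assuming default internal logging, with the time range narrowed as needed:

  index=_internal sourcetype=splunkd component=ExecProcessor log_level=ERROR
  | table _time host message

Dropping the log_level filter shows the informational ExecProcessor output as well.)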
We are currently using Splunk Enterprise version 9.3.2, not the latest. I am testing with other SVGs, font families in the XML, and so on, but the result is the same. I will communicate with the team about an upgrade request if you are certain this feature was hotfixed.
Hi @a1bg503461 - Please can you confirm which version of Splunk you are seeing the issue with? Are you simply uploading the SVG as an image in Dashboard Studio? When I use the SVG you posted as a file in a dashboard it displays correctly for me (on 9.4.1). Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
Token modifiers can be found here: https://docs.splunk.com/Documentation/SplunkCloud/latest/Viz/tokens#Syntax_to_consume_tokens. I don't use Studio very much as it doesn't have all the features available to Classic Simple XML dashboards, so I can't guarantee that this will work for Studio.
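(For reference, in Classic Simple XML the token filters are applied inline in the drilldown target; a minimal sketch, assuming a hypothetical table field called user and an external URL, might look like this:

  <drilldown>
    <!-- |u URL-encodes $row.user$ so spaces and special characters are escaped in the target URL -->
    <link target="_blank">https://example.com/search?q=$row.user|u$</link>
  </drilldown>

Whether an equivalent filter is honoured in a Dashboard Studio drilldown URL is exactly the open question above, so treat this only as the Classic behaviour.)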
@ITWhisperer How do I use this token in a Dashboard Studio drilldown URL? Are any modifiers available to dynamically add UTF encoding for spaces and special characters?
Good morning, thanks for your reply! The file already has execute permissions.
When uploading an SVG image to Splunk Dashboard Studio, the characters for German umlauts are not displayed correctly. (Screenshots in the original post: the SVG file rendered in the browser, and the SVG within the dashboard.) Can we include the UTF-8 encoding within the source code?

  <svg version="2.0" encoding="utf-8" width="300" height="200" xmlns="http://www.w3.org/2000/svg">
    <rect width="100%" height="100%" fill="red" />
    <circle cx="150" cy="100" r="80" fill="green" />
    <text x="150" y="125" font-size="60" text-anchor="middle" fill="white">
      vowels a, o and u to make ä, ö, and ü. schön (beautiful) and Vögel (birds, plural form)
    </text>
  </svg>

  {
    "type": "splunk.choropleth.svg",
    "options": {
      "svg": "splunk-enterprise-kvstore://67d185960ede0c052b05390c"
    },
    "context": {},
    "containerOptions": {},
    "showProgressBar": false,
    "showLastUpdated": false
  }
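(As a point of comparison, character encoding for an XML document such as SVG is normally declared in the XML prolog rather than as an attribute on the <svg> element itself; a minimal sketch, with purely illustrative text content, saved as a UTF-8 file would look like this:

  <?xml version="1.0" encoding="UTF-8"?>
  <svg width="300" height="200" xmlns="http://www.w3.org/2000/svg">
    <!-- umlauts placed directly in the text node; the prolog above declares the encoding -->
    <text x="150" y="125" font-size="60" text-anchor="middle" fill="white">schön, Vögel, ä ö ü</text>
  </svg>

Whether Dashboard Studio preserves that declaration once the SVG is stored in the KV store is a separate question, as discussed in the replies above.)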
Hi @MU2DOD - It looks like you're experiencing an issue which first started in ES8. Check out https://splunk.my.site.com/customer/s/article/Mission-control-8-0-fails-to-assign for more detailed info, however I believe the following should fix the issue for you (a minimal server.conf sketch follows this post):

- Ensure that the FQDN, rather than the short server name, is set in server.conf across the whole environment. Do this step if splunkd logs reference hostnames without domain names (meaning non-FQDN) over HTTPS.
- Set sslVerifyServerCert and sslVerifyServerName to true on all instances.
- Then restart the whole Splunk environment where changes have been made.
- Push the bundle from the deployer to the SHC members.

Once that is done, then in Mission Control:

- Manually add Investigation Types (which previously wasn't working).
- Set the newly added type as the default.
- Editing notable events, adding custom fields, and other actions should then work.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
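(As an illustration of the settings referenced above: the TLS verification options live in the [sslConfig] stanza of server.conf and the instance name in [general]; a minimal sketch with a hypothetical FQDN follows, and the deployment location, e.g. a deployer-pushed app versus system/local, depends on your environment:

  # server.conf - hypothetical values, adjust per instance
  [general]
  # fully qualified name of this instance rather than the short hostname
  serverName = sh01.example.com

  [sslConfig]
  sslVerifyServerCert = true
  sslVerifyServerName = true

A splunkd restart is needed after the change, and for SHC members the change should be pushed from the deployer as described in the steps above.)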
Hi @mchoudhary - Judging by the app which the search is running in, it is likely that this is an accelerated datamodel generation search which is using a lot of memory. The easiest way to see what the search is would be to run something like this, updating the search_id value with the search ID of the search you wish to investigate. Note: the single quotes are required when I run this search due to how the search_id is parsed, so it is worth keeping them in.

  index=_audit search_id='1741783414.663348' info=granted search=*

This will give info on the search, the user, the app, the provenance (which is usually the dashboard name if it's run from within a dashboard) and a bunch of other info such as start/end times. Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
Question - why wasn't the data sent directly to the PTA server from the Windows servers via outputs.conf?
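(For context, sending a copy of the data straight from the Windows forwarders to a third-party destination is normally done with a tcpout group in outputs.conf; this is a hedged sketch with hypothetical host names and ports, not the poster's actual setup:

  # outputs.conf on the Windows forwarders (hypothetical values)
  [tcpout]
  defaultGroup = splunk_indexers, pta_server

  [tcpout:splunk_indexers]
  server = idx1.example.com:9997

  [tcpout:pta_server]
  server = pta.example.com:1514
  # raw (uncooked) data so a non-Splunk receiver can parse it
  sendCookedData = false

Whether that approach fits here depends on why the data is being relayed, which is what the question above is asking.)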
Hi @pacifiquen - If there has been a period of time where the license wasn't valid and it was not a non-enforcement license, then it may be blocked. Does it give any warning about being over the licensed limit 5 times? What is the exact error? Either way, it sounds likely that you will need a reset license code; this can be supplied by Splunk Support and/or your Splunk account manager/team and will need to be applied to your account in order to remove the limitation. Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
Hello Team, could you please assist me with resolving the issue of not seeing logs in the SH after applying a new license? Additionally, since the Splunk license expired 5 months ago, could you kindly advise on the steps to fix this? Additional information: before, I often used 120GB/day and now I use 20GB/day.
@mchoudhary The search ID (SID) is your key to finding the actual search. Splunk logs this information in the _audit or _internal indexes. Replace <your_search_id> with one of the SIDs from the CMC (e.g., search_id="1741781823.13254"); a sketch of such a search follows below.

Since you're on Splunk Cloud, you don't have direct access to limits.conf or server-level configs to set memory thresholds (e.g., search_process_memory_usage_threshold). If the issue persists after optimization, or if the CMC isn't giving enough detail, open a support ticket with Splunk. Provide:

- The SIDs from the CMC.
- The searches you identified.
- A screenshot of the CMC panel (since you referenced an image).

They can check backend logs or adjust search concurrency/memory limits for you.
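(A minimal sketch of that audit search, echoing the one Will posted above and assuming the _audit index is searchable from your Splunk Cloud role; substitute a SID taken from the CMC panel for the placeholder:

  index=_audit search_id='<your_search_id>' info=granted search=*
  | table _time user app provenance search

The provenance field is usually the dashboard name when the search was launched from a dashboard, which helps map a SID back to something recognisable.)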
@mchoudhary  https://lantern.splunk.com/Splunk_Platform/Product_Tips/Administration/Using_the_Splunk_Cloud_Monitoring_Console_effectively 
@mchoudhary You can use the introspection search below to find the high-memory-consuming searches:

  index=_introspection sourcetype=splunk_resource_usage data.search_props.sid::* data.search_props.mode!=RT data.search_props.user!="splunk-system-user"
  | eval process = 'data.process'
  | eval args = 'data.args'
  | eval pid = 'data.pid'
  | eval ppid = 'data.ppid'
  | eval elapsed = 'data.elapsed'
  | eval mem_used = 'data.mem_used'
  | eval mem = 'data.mem'
  | eval pct_memory = 'data.pct_memory'
  | eval pct_cpu = 'data.pct_cpu'
  | eval sid = 'data.search_props.sid'
  | eval app = 'data.search_props.app'
  | eval label = 'data.search_props.label'
  | eval type = 'data.search_props.type'
  | eval mode = 'data.search_props.mode'
  | eval user = 'data.search_props.user'
  | eval role = 'data.search_props.role'
  | eval label = if(isnotnull('data.search_props.label'), 'data.search_props.label', "")
  | eval provenance = if(isnotnull('data.search_props.provenance'), 'data.search_props.provenance', "unknown")
  | eval search_head = case(isnotnull('data.search_props.search_head') AND 'data.search_props.role' == "peer", 'data.search_props.search_head', isnull('data.search_props.search_head') AND 'data.search_props.role' == "head", "_self", isnull('data.search_props.search_head') AND 'data.search_props.role' == "peer", "_unknown")
  | eval search_label = if('label'!="", 'label', 'sid')
  | eval instance = if(isnotnull(dns_alt_name), dns_alt_name, host)
  | stats max(elapsed) as runtime max(mem_used) as mem_used earliest(_time) as _time by search_label, provenance, type, mode, app, role, user, instance
  | eval mem_used = round(mem_used, 2)
  | sort 20 - mem_used, runtime
  | eval runtime = tostring(round(runtime, 2), "duration")
  | fields search_label, provenance, mem_used, instance, runtime, _time, type, mode, app, user, role
  | eval _time=strftime(_time,"%+")
  | rename search_label as Name, provenance as Provenance, mem_used as "Memory Usage (KB)", instance as Instance, runtime as "Search Duration", _time as Started, type as Type, mode as Mode, app as App, user as User, role as Role
  | appendpipe [ stats count | eval Name="data unavailable" | where count==0 | table Name ]
Lately, my CMC has been indicating that there are 20 searches exceeding 10% of system memory, but when I click on it I don't see which searches are listed as high memory searches. All it gives is the search ID, app, memory used and status (refer to the image below). Could anyone please suggest how to troubleshoot this and find which searches fall under high memory searches?
@hummingbird81 I tested this using makeresults with dummy data. Copy and paste this query into your Splunk search bar to run it. It doesn't depend on your actual index or CSV, so it's safe for testing.

Dummy data:

  | makeresults
  | eval _time=strptime("2025-03-01T12:00:00.00+05:30", "%Y-%m-%dT%H:%M:%S.%2Q%z"), user_id="001", Name="John Smith", title="Consultant", email="john.smith@example.com", start_Date="2021-06-28T23:59:59.00+05:30", end_Date="2025-06-28T23:59:59.00+05:30", source="okta", mod_time=_time
  | fields user_id, Name, title, email, start_Date, end_Date, mod_time, source
  | append [
      | makeresults
      | eval _time=strptime("2022-06-01T12:00:00.00+05:30", "%Y-%m-%dT%H:%M:%S.%2Q%z"), user_id="001", Name="John Smith", title="Administrator", email="john.smith@example.com", start_Date="2021-06-28T23:59:59.00+05:30", end_Date="2022-06-28T23:59:59.00+05:30", source="csv", mod_time=if(isnull(_time), strptime(end_Date, "%Y-%m-%dT%H:%M:%S.%2Q%z"), _time)
      | fields user_id, Name, title, email, start_Date, end_Date, mod_time, source
    ]
  | sort 0 -mod_time
  | dedup user_id
  | table Name, title, start_Date, end_Date, user_id

You can try this:

  index=okta
  | eval source="okta", mod_time=_time
  | fields user_id, Name, title, email, start_Date, end_Date, mod_time, source
  | append [
      | inputlookup identities.csv
      | eval source="csv", mod_time=if(isnull(_time), strptime(end_Date, "%Y-%m-%dT%H:%M:%S.%2Q%z"), _time)
      | fields user_id, Name, title, email, start_Date, end_Date, mod_time, source
    ]
  | sort 0 -mod_time
  | dedup user_id
  | table Name, title, start_Date, end_Date, user_id

The sort by mod_time descending prioritizes the latest record, and the dedup keeps only the first (i.e. latest) record per user_id.
Hi All, looking for some advice on how to take the latest values from 2 datasets. We have a base search that pulls user details like name, start_date, end_date, title, location etc. from an index=okta:

  name: John Smith | start_date: 2021-06-28T23:59:59.00+05:30 | end_date: 2025-06-28T23:59:59.00+05:30 | title: Consultant | user_id: 001

The above index has the most current data for a user. Next we have a master lookup file (identities.csv) where we maintain all user details from the past few years. This master lookup contains the same fields as the above index. For example:

  name: John Smith | start_date: 2021-06-28T23:59:59.00+05:30 | end_date: 2022-06-28T23:59:59.00+05:30 | title: Administrator | user_id: 001

Notice the end_date and title are different in the lookup. Below is our current search that compares the 2 datasets. We want it to update the date fields or any other field, whichever is the latest, but at the moment it does NOT update the fields even if a field like end_date or title is modified in the index.

  index=okta
  | stats latest(_time) as _time, values(profile.title) as title, values(profile.email) as email, values(profile.startDate) as start_Date, values(profile.endDate) as end_Date, values(profile.Name) as Name by user_id
  | append [| inputlookup identities.csv]
  | stats latest(_time) as _time, latest(profile.title) as title, latest(profile.email) as email, latest(profile.startDate) as start_Date, latest(profile.endDate) as end_Date, latest(profile.Name) as Name by user_id
  | table Name title start_date end_date user_id

Running the above query still shows the old info, with the old end_date and title, even though I am using | stats latest(). Please advise how to retrieve the latest value, be it a date field or a string field such as title:

  name: John Smith | start_date: 2021-06-28T23:59:59.00+05:30 | end_date: 2022-06-28T23:59:59.00+05:30 | title: Administrator
Greetings. We are currently using Splunk ES (on-prem) 7.3.3, and I updated Splunk to version 9.4.1. Since the upgrade we're unable to edit ES findings. For instance, if I try to edit a finding so it can be reassigned to someone, or closed, I receive the following error pop-up: "Failure Failed to update finding: Cannot redirect an already redirected call". I haven't been able to locate any resources that may help point me in the right direction. Any help would be appreciated.
@doli

1. Go to the add-on and configure an account:
- Account Name: enter a unique name for this account.
- IP Address/Domain: enter the address of the Cisco Cyber Vision instance in the format https://<ip address> or https://<domain-name>.
- API Token: enter the API token generated from Cyber Vision for the above account.
- If you have a proxy, configure the proxy details.

2. Create an input:
Navigate to the inputs section and create a new input based on your requirements.

Note:
- Create an index for this data source to store incoming events (see the sketch after this post).
- Check and open the necessary firewall ports/rules for data ingestion.
- Ensure communication between the data source and the Splunk components.
- If events are not visible after configuration, check the internal index (_internal).

For Splunk clusters:
- Create the index on the Cluster Master (CM) and push it to the indexers.
- Also create the same index on the Heavy Forwarder (HF).
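(As an illustration of the index-creation note above, a minimal indexes.conf stanza might look like the following; the index name cybervision is only a placeholder, and in a clustered environment the stanza would go into an app pushed from the Cluster Master rather than be edited directly on the indexers:

  # indexes.conf - hypothetical index name, default volume paths
  [cybervision]
  homePath   = $SPLUNK_DB/cybervision/db
  coldPath   = $SPLUNK_DB/cybervision/colddb
  thawedPath = $SPLUNK_DB/cybervision/thaweddb

Defining the same index name on the Heavy Forwarder, even though it forwards events onward, is typically what lets the add-on's input configuration validate the index you select.)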