All Topics

Hello Splunkers, I'm sharing a temporary solution for the "A custom JavaScript error caused an issue loading your dashboard" popup message that appears when your dashboard has console errors.

Basically, this message indicates that a JavaScript error occurred while the dashboard's script was executing; you can confirm this by inspecting the browser console. Before applying this workaround, I suggest identifying the JavaScript error and resolving it properly where possible, because it may affect the dashboard's logic. If resolving it is taking time, this temporary solution can help in the meantime.

This is a JavaScript-based workaround that overrides the popup container and empties it whenever Splunk tries to populate it with the error message. You can put this JS code in your dashboard's custom JS file, or create a common file and reference it from multiple dashboards.

require([
    'underscore',
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function(_, $, mvc) {
    // Log the popup container element (useful for checking the selector)
    console.log($('[data-test="layer-container"]'));
    // Whenever the container's contents change, empty it again
    $('[data-test="layer-container"]').on('DOMSubtreeModified', function(){
        console.log('changed two');
        $('[data-test="layer-container"]').empty();
    });
    // Empty it once on load as well
    $('[data-test="layer-container"]').empty();
});

I have tried this with Splunk Enterprise Version: 9.0.2 Build: 17e00c557dc1. If you run into any difficulties, please let us know.

I hope this helps. Happy Splunking!
Thanks, KV
If any of my replies help you to solve a problem or gain knowledge, an upvote would be appreciated.
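Side note: DOMSubtreeModified is deprecated in modern browsers, so a MutationObserver-based variant may be more future-proof. This is only a sketch under the assumption that the same [data-test="layer-container"] element is the popup container; it is not part of the original workaround:

require([
    'jquery',
    'splunkjs/mvc/simplexml/ready!'
], function($) {
    // Hypothetical variant of the workaround above, using MutationObserver
    // instead of the deprecated DOMSubtreeModified event.
    var container = $('[data-test="layer-container"]')[0];
    if (container) {
        var observer = new MutationObserver(function() {
            // Empty the popup container whenever anything is added to it
            $(container).empty();
        });
        observer.observe(container, { childList: true, subtree: true });
        // Empty it once on load as well
        $(container).empty();
    }
});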
Hello, I would like to forward data between two Splunk instances in clear text. For that I use HEC. This is my outputs.conf:

[httpout]
httpEventCollectorToken = <HEC_TOKEN>
uri = http://hec_target:8088

I would like to inspect the events with a third-party application, but they appear to be encoded in S2S. This configuration also sends the events to the /services/collector/s2s endpoint, which is not the endpoint one would use to forward clear-text (JSON) events. Is there any way to send the events in a readable format? I am aware there is syslog output; I would try it if there is no way to change the HEC output accordingly. Thanks in advance.
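For reference, this is roughly what a clear-text JSON event sent to the standard HEC event endpoint looks like (a minimal sketch; the token, host, sourcetype, and index values are placeholders, and this is not what the httpout stanza produces today):

POST http://hec_target:8088/services/collector/event
Authorization: Splunk <HEC_TOKEN>

{
  "time": 1672531200,
  "host": "forwarder01",
  "sourcetype": "my:sourcetype",
  "index": "main",
  "event": { "message": "example clear-text event" }
}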
Hi, is it possible to center and alter the font size of titles in a dashboard? I'm working with single values.
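A hedged sketch for a Simple XML dashboard, assuming the panel title is rendered with a .panel-title element (selectors vary between Splunk versions and between Simple XML and Dashboard Studio, so verify with the browser inspector first):

<dashboard>
  <row>
    <panel>
      <html>
        <style>
          /* Hypothetical selector: adjust after inspecting your own dashboard */
          .dashboard-panel h2.panel-title {
            text-align: center;
            font-size: 20px;
          }
        </style>
      </html>
    </panel>
  </row>
</dashboard>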
Here I am using Splunk version 8.2.5, and I have found this vulnerability (CVE-2022-33891) in the Apache Spark package and the Apache Hive package: hive-exec-3.1.2.jar and spark-core_2.12-3.0.1.jar. Can someone suggest which version of Splunk I should use so that I can get rid of this vulnerability?
I want to know the annual Splunk cost for handling 10 GB of data per day.
Hi all, We have successfully registered and connected a new Azure Event Hub namespace via the 'Splunk Add-on for Microsoft Cloud Services' app, which runs on a dedicated Azure log collector machine, but we are not sure why we do not see the events on the Splunk search head, even though an older namespace is up and running. Your help is much appreciated! Thank you all!
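One quick way to confirm whether any Event Hub data is reaching the indexers is to search the add-on's sourcetype directly (a sketch, assuming the default sourcetype mscs:azure:eventhub and that your role can see the target index):

index=* sourcetype="mscs:azure:eventhub" earliest=-24h
| stats count by index, host, source

If this returns nothing on the search head but the collector machine shows data locally, the gap is usually forwarding or index permissions rather than the Event Hub input itself.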
Hi, I want to create a detector based on a custom event ingested using the API. I can select the eventType value as the signal but the conditions are all about signal values which obviously do not apply to an event.   Any ideas?
With this initial query I obtain a list of results grouped by Consumer and pod:

messages_number container_name="pol-sms-amh-throttler"
| stats avg(messages_number) as consumer_node by Consumer, pod

Then I append a second stats where I want to sum all the pod values by Consumer:

messages_number container_name="pol-sms-amh-throttler"
| stats avg(messages_number) as consumer_node by Consumer, pod
| stats sum(consumer_node) as AvgConsumption by Consumer limit=0

Is this query correct and accurate for what I want to achieve? Also, I don't know how I can see AvgConsumption in a visualization.
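A sketch of one way to structure this, assuming consumer_node is meant to be the per-pod average and AvgConsumption the per-Consumer sum of those averages; the final sort just prepares the result for charting:

messages_number container_name="pol-sms-amh-throttler"
| stats avg(messages_number) as consumer_node by Consumer, pod
| stats sum(consumer_node) as AvgConsumption by Consumer
| sort - AvgConsumption

Running this in Search and switching to the Visualization tab with a bar or column chart will plot AvgConsumption per Consumer.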
I have a lookup table like below:

label,value
op1,"Option 1"
op2,"Option 2"
op3,"Option 3"

When I try to configure a dynamic dropdown, I can only key in a search string to fetch the value field. My requirement is to display the values, and when the user chooses one, the corresponding label should be sent to the backend instead of the displayed value. Example: if the user chooses "Option 2", on submission op2 should be the value passed, instead of the value the user chose from the dropdown.
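In Simple XML this mapping is exactly what fieldForLabel and fieldForValue are for: fieldForLabel controls what the user sees, fieldForValue controls what the token carries. A sketch, assuming the lookup file is called options.csv and the token name is a placeholder:

<input type="dropdown" token="selected_option">
  <label>Option</label>
  <fieldForLabel>value</fieldForLabel>
  <fieldForValue>label</fieldForValue>
  <search>
    <query>| inputlookup options.csv | fields label value</query>
  </search>
</input>

With the lookup above, the dropdown displays "Option 2" but $selected_option$ resolves to op2.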
Hi all. I use Splunk at my workplace and recently I feel like its performance is decreasing. Basic search queries, like my username or email address, used to return results; now they don't. It doesn't matter which time frame I choose: zero events. I was told that an app called "estreamer" was down and that one of the infrastructure workers fixed it and claimed to have restored all the missing data. That was last Thursday. Sadly, he's not familiar with this system, so I need to pin down the issue before I talk with him. Today, I still cannot search for these basic strings; it gives zero events. Any idea how I can check what's wrong, so I can tell the infra worker to fix a specific issue/index/app?
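One way to narrow this down is to check whether any data has actually been indexed for the relevant period, independent of your specific search terms (a sketch; restrict index=* to the index the eStreamer data normally lands in if you know it):

| tstats count where index=* by index _time span=1d
| sort index _time

If the index that normally holds that data shows a gap of days, the problem is ingestion (the feed was not backfilled) rather than your searches.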
Hello everyone. I am trying to track office and remote logins across multiple indexes with the transaction command. One of the logs has a session ID, so I am able to use a transaction command to track that, but it's the second piece that is difficult. The other index does not have a session ID, and the only field the two have in common is the username. For remote logins, when a user signs in to the remote desktop app, it generates an authentication event along with a session ID. The other index also generates a login event. The authentication event and the login event are at most a second apart, and in most cases have the exact same timestamp. If a user logs in from the office, only a login event is captured. My query is as follows, but there are some issues with the results I am seeing:

(index=connection_log username="user" message="logged in") OR (index=remote_app username="user" action=auth OR action=terminateSession)
| transaction username maxspan=2s keeporphans=true
| transaction session_id startswith=auth endswith=terminateSession

I've tried using subsearches as well but am unable to get the desired results. Wondering if anyone else has tried to do something similar. Your help would be appreciated. Thank you
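One way to sanity-check the username pairing without nesting transactions is to bucket both sources into short time windows and aggregate with stats, which is usually cheaper than transaction (a sketch; the field names follow the search above and the 2-second window is an assumption):

(index=connection_log username="user" message="logged in") OR (index=remote_app username="user" action=auth OR action=terminateSession)
| bin _time span=2s
| stats values(session_id) as session_id values(action) as actions values(message) as messages by username, _time

Rows that show both the login message and an auth action in the same window are the remote logins; rows with only the login message are office logins.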
I have a query that works, but the output calculates a percentage column in a chart. I need to show the total of TAM and the correct percentage value across all the returned rows. I'm using this:

| inputlookup Patch-Status_Summary_AllBU_v3.csv
| stats count(ip_address) as total, sum(comptag) as compliant_count by BU
| eval patchcompliance=round((compliant_count/total)*100,1)
| fields BU total compliant_count patchcompliance
| rename BU as Domain, total as TAM, patchcompliance as "% Compliance"
| appendpipe [stats sum(TAM) as TAM sum(compliant_count) as compliant_count | eval totpercent=round((comp/TAM)*100,1)]
| eval TAM = tostring(TAM, "commas")

The output is:

Domain   TAM       compliant_count   % Compliance
BU1      1,180     1146              97.1
BU2      2,489     2420              97.2
BU3      409,881   96653             23.6
BU4      3         3                 100.0
BU5      1,404     1375              97.9
BU6      119,003   90100             75.7
BU7      33,506    30669             91.5
BU8      2,862     1997              69.8
BU9      239,897   216401            90.2
BU10     3,945     3832              97.1
BU11     569       482               84.7
         814,739   445078

If I add avg("% Compliance") as "% Compliance" to the appendpipe stats command, it does not produce the correct overall percentage, which in this case is 54.6; the average would display 87.1 instead. How do I calculate the correct overall percentage using the totals of the TAM and compliant_count columns?
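A sketch of the appendpipe with the overall percentage computed from the summed columns rather than from an average of the per-row percentages (this assumes the totals row should use the same compliant_count/TAM ratio as the per-BU rows; note the eval in the original appendpipe references comp, which does not exist at that point, which is why the totals row shows no percentage):

| inputlookup Patch-Status_Summary_AllBU_v3.csv
| stats count(ip_address) as total, sum(comptag) as compliant_count by BU
| eval patchcompliance=round((compliant_count/total)*100,1)
| fields BU total compliant_count patchcompliance
| rename BU as Domain, total as TAM, patchcompliance as "% Compliance"
| appendpipe [stats sum(TAM) as TAM sum(compliant_count) as compliant_count | eval "% Compliance"=round((compliant_count/TAM)*100,1) | eval Domain="Total"]
| eval TAM = tostring(TAM, "commas")

With the totals above, 445078 / 814739 * 100 rounds to 54.6, which matches the expected overall figure.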
Right now I have a cron expression like this: 0 * * * *, so the report is sent out every hour. How can I generate the report only once, when the condition is triggered? Thanks!
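One common approach is to keep the hourly cron but make the search an alert that only fires (and only emails) when its trigger condition is met, with throttling so repeated matches within a window do not re-send. A sketch in savedsearches.conf terms; the stanza name, throttle window, and email address are placeholders:

[My hourly report]
cron_schedule = 0 * * * *
enableSched = 1
# Trigger condition: only act when the search returns at least one result
counttype = number of events
relation = greater than
quantity = 0
# Throttle so the alert fires at most once per 24 hours
alert.suppress = 1
alert.suppress.period = 24h
action.email = 1
action.email.to = someone@example.com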
New to the community. I searched for this message, "Unable to fetch defaults: Unable to fetch authorize defaults.", but couldn't find anything relevant. Has anyone seen this message before? Any idea how to resolve it?
Hi Team,

props.conf:

[host::1.(xx|xx).xx.xx(x|y)]
TRANSFORMS-change_index_abc_secure = change_index_abc_secure

transforms.conf:

[change_index_abc_secure]
SOURCE_KEY = MetaData:Index
REGEX = os, os_secure
DEST_KEY = MetaData:Index
FORMAT = index::abc_secure

I need to route the logs from a certain host to index=abc_secure (not all of the logs, only the os and os_secure logs).
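If os and os_secure are sourcetypes, a hedged sketch of the routing (under that assumption, and assuming the host stanza itself matches): the REGEX needs to be a real alternation against the right metadata key, and index routing normally writes to _MetaData:Index with the bare index name as FORMAT:

# props.conf
[host::1.(xx|xx).xx.xx(x|y)]
TRANSFORMS-change_index_abc_secure = change_index_abc_secure

# transforms.conf
[change_index_abc_secure]
# Sourcetype metadata values normally carry a sourcetype:: prefix
SOURCE_KEY = MetaData:Sourcetype
REGEX = ^sourcetype::(os|os_secure)$
DEST_KEY = _MetaData:Index
FORMAT = abc_secure

Events from that host whose sourcetype is os or os_secure would then be rerouted to abc_secure, while everything else keeps its original index.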
Is it possible to build an app that contains a pre-configured inputs.conf, so that (administrator-defined) modular inputs emit events to a 'static' HEC input which is created (via the inputs.conf file) when an administrator installs the Splunk app?
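HEC tokens are themselves defined as inputs.conf stanzas, so in principle an app can ship one. A sketch of what such a packaged default/inputs.conf might look like; the app name, input name, token GUID, index, and sourcetype are placeholders, and shipping a fixed token has obvious security implications on shared or cloud deployments:

# $SPLUNK_HOME/etc/apps/my_app/default/inputs.conf
[http]
disabled = 0

[http://my_static_hec_input]
disabled = 0
token = 11111111-2222-3333-4444-555555555555
index = main
sourcetype = my:app:events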
Is there an option to update the value of a specific field within a specific artifact? I was able to update it using the Phantom update_artifact action or with a REST call, but when the field is updated, the other existing fields in that artifact are also deleted.
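This behaviour is consistent with the update replacing the artifact's whole CEF dictionary rather than merging into it. A hedged sketch of a fetch-merge-update sequence against the artifact REST endpoint (the host, token, artifact ID, and field names are placeholders):

GET https://phantom.example.com/rest/artifact/1234
ph-auth-token: <token>

# The response includes the artifact's current "cef" dictionary.
# Merge your change into it locally, then post the full merged dictionary back:

POST https://phantom.example.com/rest/artifact/1234
ph-auth-token: <token>

{
  "cef": {
    "existing_field_1": "keep this",
    "existing_field_2": "keep this too",
    "field_to_change": "new value"
  }
}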
Hello, In the events, the severity is captured as a value between 1 and 10. I want to represent it as High, Medium, Low, etc. For example: if the severity is between 1 and 3, Low; if the severity is between 4 and 5, Medium; and so on. Please advise on how to achieve this. Thanks in advance.
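A sketch using eval with case(); the Low and Medium bands follow the post, while the High/Critical cut-offs are assumptions standing in for the "and so on" and should be adjusted to your own scheme (the field name severity is also assumed):

... your base search ...
| eval severity_label=case(
    severity>=1 AND severity<=3, "Low",
    severity>=4 AND severity<=5, "Medium",
    severity>=6 AND severity<=8, "High",
    severity>=9 AND severity<=10, "Critical",
    true(), "Unknown")

The same expression can also be defined once as a calculated field so the label is available in every search without repeating the eval.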
I have the following scenario: an object transitions through multiple queues, and I want to query the time spent in Queue 1, grouped by object type. Each object has a unique id, and it generates an event every time it transitions between queues:

Event 1:
id : 123
type : type1
status : IN_QUEUE_1
duration : 100

Event 2:
id : 123
type : type1
status : IN_QUEUE_2
duration : 150

The desired output looks like this:

Type      average_time_in_queue1
type1     50
type2     ...
type3     ...
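Assuming each IN_QUEUE_1 event carries the time spent in that queue in its duration field, a minimal sketch (the index name and field extractions are placeholders):

index=your_index status=IN_QUEUE_1
| stats avg(duration) as average_time_in_queue1 by type

If instead the time in Queue 1 has to be derived from the gap between the IN_QUEUE_1 and IN_QUEUE_2 events for the same id, you would compute a per-id delta on _time (for example with stats range(_time) by id, type) before averaging, rather than averaging duration directly.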
Activity Result: {"IsProductValidated":"false","ErrorCodes":[{"errorCode":"PRD-202","errorMessage":"Product Validation Service Returned Error :: Reason: Options you have selected are not available at this time. Please change your selections."}]}