All Topics

Hello Splunkers. I would like to ask for some advice, as we are planning to replace a lot of rsync scripts that we use to distribute apps to all of our deployment servers.

We have an architecture of 5 different tenants that are pretty much completely isolated from each other. Because of that, we have one deployment server in each tenant. To centrally manage all these tenants, we have one "master" server where we keep all our Splunk configuration (apps, serverclasses, etc.) and use rsync-based scripts to push it out to the other deployment servers.

I have the impression that tools like Ansible or Puppet have become the "industry standard" for handling such big Splunk multi-tenant environments. I found this presentation from .conf19, held by Splunk themselves, that shows how to use Ansible to achieve this: FN2048.pdf (splunk.com)

As I understand it, the alternative to using a 3rd-party tool (i.e. Ansible) would be a "master/slave" configuration for the deployment servers, having the master deployment server push apps to "/opt/splunk/etc/deployment-apps/" on the other slave deployment servers with a config like this:

[serverClass:secondaryDeploymentServersDeploymentApps]
targetRepositoryLocation = $SPLUNK_HOME/etc/deployment-apps

(source: https://community.splunk.com/t5/Deployment-Architecture/How-to-set-up-Multiple-Deployment-Servers-Configuration/m-p/45392 )

We want to get rid of all these scripts for syncing indexers, standalone search heads, search head clusters and UFs, so we are trying to find the best way. My question is: are there any advantages or disadvantages to these two models? The "Splunk only" method doesn't seem to be nearly as popular as using Ansible. Thanks in advance for any advice.
My requirement is to combine the results of a sub-search with the results of the main search, but the sourcetype/source is different for the main search and the sub-search, and I'm not getting the expected results when using the format command or $field_name.

inputlookup host.csv consists of the list of hosts to be monitored.

Main search:

index=abc source=cpu sourcetype=cpu CPU=all [| inputlookup host.csv ]
| eval host=mvindex(split(host,"."),0)
| stats avg(pctIdle) AS CPU_Idle by host
| eval CPU_Idle=round(CPU_Idle,0)
| eval warning=15, critical=10
| where CPU_Idle<=warning
| sort CPU_Idle

Sub-search:

[search index=abc source=top | dedup USER | return $USER]

There is a field host which is common to both, but the events from index=abc source=cpu sourcetype=cpu do not contain a USER field; the USER field only exists when source=top, not when source=cpu.
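The enrichment being described — per-host CPU stats from one source, a USER field joined in from another source on the shared host field — can be sketched outside Splunk. This is only an illustration of the join logic; all sample hosts, users, and values below are made up:

```python
# Enrich per-host CPU stats with the USER seen for that host in another
# source, joining on the shared "host" field (all sample data hypothetical).
cpu_stats = {"web01": 12, "db01": 8}                      # host -> CPU_Idle
top_events = [("web01", "alice"), ("db01", "svc_batch")]  # (host, USER) from source=top

# Build a host -> USER lookup from the second source.
user_by_host = {host: user for host, user in top_events}

# Attach the USER to each host's stats; hosts with no match get None.
enriched = {
    host: {"CPU_Idle": idle, "USER": user_by_host.get(host)}
    for host, idle in cpu_stats.items()
}
print(enriched)
```

In SPL terms this corresponds to correlating the two sources on host (e.g. with stats over both sources or a lookup) rather than passing USER through a subsearch, since USER never exists in the source=cpu events.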
I want to find all spans of a particular service whose duration is greater than a particular amount. The only duration control that I can find limits the search to traces whose entire duration (of which the duration of the span for the service I'm interested in is only a part) is in the specified range.  Is there a way to do what I want?
We have created an experiment in MLTK and published a model for it, is there a way other viewers can see the experiment?  Everyone seems to be able to see only their own experiments when navigating to the experiments tab.  I would have expected to see a Permissions option in the Manage drop down menu.
Hi all, I'm very new to Splunk, but have had some success using Dashboard Studio to display storage aggregate capacity. I have a SizeUsed field which gives me the % full of the aggregate at various points in time. I have set the queryParameters to earliest="-365d" and latest="now". This is the search I am using to display the current % full in a SingleValue chart:

index="lgt_netapp_prod" sourcetype="netapp:aggregate:csv" Name="type_ctry_2000_h01_n01_fsas_02"
| timechart last(SizeUsed) span=3d

I also have an Area chart on the same dashboard showing the growth mapped out over 12 months. I would like to calculate the number of days until the aggregate is full, using the daily growth rate of the aggregate over a 12-month period. The logic could be something like:

dailyGrowth = (last(SizeUsed) - first(SizeUsed))/365
capacityRemaining = 100 - last(SizeUsed)
daysTillFull = capacityRemaining / dailyGrowth

Unfortunately, I haven't been able to figure out the syntax which would allow me to use the values in this way and then display the result in a chart. Is it possible someone could point me in the right direction here? It would be a real feather in my cap if I could make this work for my employers. Cheers.....
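The three-step arithmetic above can be sanity-checked outside Splunk first. This is a minimal sketch with made-up sample values (40% full a year ago, 75% full now); only the formula itself comes from the post:

```python
# Estimate days until a storage aggregate is full from two capacity samples.
first_used = 40.0    # % full 365 days ago (hypothetical)
last_used = 75.0     # % full now (hypothetical)
window_days = 365

daily_growth = (last_used - first_used) / window_days   # % growth per day
capacity_remaining = 100 - last_used                    # % still free
days_till_full = capacity_remaining / daily_growth

print(round(daily_growth, 3), round(days_till_full, 1))
```

In SPL the same calculation would typically hang off a stats call that produces first(SizeUsed) and last(SizeUsed), followed by eval steps mirroring the three lines above.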
Hi all. I am currently experiencing an issue where simple strings won't return any events, while two weeks ago they did. The time frame doesn't matter; I tried "All time" and still got zero events. So, I want to check whether there is an issue with an index being disabled or not working properly. Is there a search query I can use to find these indexes?
I have a pie chart displaying the top 10 IP addresses for the past 60 minutes, and I'm trying to figure out how to make each slice of the pie chart clickable, so that clicking it opens a new window with relevant information about that specific IP address instead of all the IP addresses in the pie chart.
We are using Splunk Enterprise. We have created deployment server serverclasses per department of the users whose logs we collect, and added the clients to the applicable serverclass. When I search from the search bar, logs from all clients are returned, but I would like to search only the logs of clients belonging to a specific deployment server serverclass. Is that kind of filtering possible? Thank you in advance.
Hello Splunkers, I'm sharing a temporary solution for the "A custom JavaScript error caused an issue loading your dashboard" popup message that appears when your dashboard has any console errors.

Basically, this error message indicates that there was a JavaScript error during the execution of the script; you can easily confirm this with the browser's inspector. Before applying this solution, I suggest identifying the JavaScript error and resolving it where possible, because it may impact your dashboard's logic. If resolving the underlying issue is taking time, this temporary workaround is good for you.

This is a JavaScript-based solution which overrides the popup container and empties it whenever it is populated with the error message. You can put this JS code in your dashboard's custom JS file, or create a common file and use it in multiple dashboards:

require([
    'underscore',
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function(_, $, mvc) {
    // The error popup is rendered inside this layer container.
    var container = $('[data-test="layer-container"]');
    // Empty the container again whenever its contents change.
    container.on('DOMSubtreeModified', function() {
        $('[data-test="layer-container"]').empty();
    });
    // Clear any error popup that is already present.
    container.empty();
});

I have tried this with Splunk Enterprise Version: 9.0.2 Build: 17e00c557dc1. In case you find any difficulties, please let us know.

I hope this will help you. Happy Splunking!
Thanks,
KV
If any of my replies help you to solve the problem or gain knowledge, an upvote would be appreciated.
Hello, I would like to forward data between two Splunk instances in clear text. For that I use HEC. This is my outputs.conf:

[httpout]
httpEventCollectorToken = <HEC_TOKEN>
uri = http://hec_target:8088

I would like to inspect the events with a third-party application, but they appear to be encoded in S2S. This configuration also sends the events to the /services/collector/s2s endpoint, which is not the endpoint one would forward clear-text (JSON) events to. Is there any way to send the events in a readable format? I am aware there is syslog output; I will try that if there is no way to change the HEC output accordingly. Thanks in advance.
Hi, is it possible to center and alter the font size of titles in a dashboard? I'm working with single values.
I am using Splunk version 8.2.5, and I have found the vulnerability CVE-2022-33891 in the Apache Spark and Apache Hive packages:

hive-exec-3.1.2.jar
spark-core_2.12-3.0.1.jar

Can someone suggest which version of Splunk I should upgrade to in order to get rid of this vulnerability?
I want to know the annual Splunk cost for handling 10 GB of data per day.
Hi all, We have successfully registered and connected a new Azure Event Hub namespace via the 'Splunk Add-on for Microsoft Cloud Services' app on a dedicated Azure log collector machine, but we are not sure why we do not see its events on the Splunk search head, although an old namespace is up and running. Your help is much appreciated. Thank you all!
Hi, I want to create a detector based on a custom event ingested using the API. I can select the eventType value as the signal, but the conditions are all about signal values, which obviously do not apply to an event. Any ideas?
With this initial query I obtain a list of results grouped by Consumer and pod:

messages_number container_name="pol-sms-amh-throttler"
| stats avg(messages_number) as consumer_node by Consumer, pod

Then I append a second stats where I want to sum all the pods' values by Consumer:

messages_number container_name="pol-sms-amh-throttler"
| stats avg(messages_number) as consumer_node by Consumer, pod
| stats sum(consumer_node) as AvgConsumption by Consumer limit=0

Is this query correct and accurate for what I want to achieve? Also, I don't know how I can see AvgConsumption in a visualization.
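The two-stage aggregation in the query — average per (Consumer, pod), then sum of those averages per Consumer — can be checked against a tiny dataset. This is only a sketch; all consumer names, pods, and counts below are hypothetical:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical samples: (Consumer, pod, messages_number)
samples = [
    ("c1", "pod-a", 10), ("c1", "pod-a", 20),  # pod-a average: 15
    ("c1", "pod-b", 30),                       # pod-b average: 30
    ("c2", "pod-a", 40), ("c2", "pod-a", 60),  # pod-a average: 50
]

# Stage 1: avg(messages_number) by Consumer, pod
per_pod = defaultdict(list)
for consumer, pod, n in samples:
    per_pod[(consumer, pod)].append(n)
consumer_node = {key: mean(vals) for key, vals in per_pod.items()}

# Stage 2: sum(consumer_node) by Consumer
avg_consumption = defaultdict(float)
for (consumer, _pod), avg in consumer_node.items():
    avg_consumption[consumer] += avg

# c1: 15 + 30 = 45, c2: 50
print(dict(avg_consumption))
```

So the second stats does what the query intends: each pod contributes its average once, and the per-Consumer result is the sum of those per-pod averages.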
I have a lookup table like below:

label,value
op1,"Option 1"
op2,"Option 2"
op3,"Option 3"

When I try to configure a dynamic dropdown, I can only key in a search string to fetch the value field. My requirement is to display the values, and when a user chooses one, the respective label should be sent to the backend instead of the displayed value. Example: if the user chooses "Option 2", on submission op2 should be the value passed, not the value the user chose from the dropdown.
Hi all. I use Splunk at my workplace, and recently its performance seems to be decreasing. Basic search queries like my username or email address used to return results; now they don't. It doesn't matter which time frame I choose: zero events. I was told that an app called "estreamer" was down, and one of the infrastructure workers fixed it and claimed to have restored all the missing data. That was last Thursday. Sadly, he's not familiar with this system, so I need to pinpoint the issue when I talk with him. Today, I still cannot search these basic strings; I get zero events. Any idea how I can check what's wrong, so I can tell the infra worker which specific issue/index/app to fix?
Hello everyone. I am trying to track office and remote logins across multiple indexes with the transaction command. One of the logs has a session id, so I am able to use a transaction command to track that, but it's the second piece that is difficult: the other index does not have a session id, and the only field the two have in common is username.

For remote logins, if a user signs into the remote desktop app, it generates an authentication event along with a session id. The other index also generates a login event. The authentication event and login event are at most a second apart, but in most cases occur at the same exact time. If a user logs in from the office, only a login event is captured. My query is as follows, but there are some issues with the results I am seeing:

(index=connection_log username="user" message="logged in") OR (index=remote_app username="user" action=auth OR action=terminateSession)
| transaction username maxspan=2s keeporphans=true
| transaction session_id startswith=auth endswith=terminateSession

I've tried using subsearches as well but am unable to get the desired results. Has anyone else tried to do something similar? Your help would be appreciated. Thank you.
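The pairing rule described above — a login event counts as remote when an auth event for the same username occurs within 2 seconds, and as an office login otherwise — can be prototyped outside Splunk. This is only a sketch of the correlation logic; all timestamps, usernames, and session ids below are hypothetical:

```python
# Pair login events (no session_id) with auth events (with session_id)
# for the same username when their timestamps are within 2 seconds.
logins = [  # (epoch_seconds, username)
    (100, "alice"), (250, "bob"),
]
auths = [   # (epoch_seconds, username, session_id)
    (101, "alice", "s-1"),   # remote login: auth within 2s of the login
]

MAX_SPAN = 2  # mirrors maxspan=2s in the transaction command

def classify(logins, auths):
    results = []
    for t_login, user in logins:
        match = next(
            (a for a in auths
             if a[1] == user and abs(a[0] - t_login) <= MAX_SPAN),
            None,
        )
        if match:
            results.append((user, "remote", match[2]))  # correlated session
        else:
            results.append((user, "office", None))      # login event only
    return results

print(classify(logins, auths))
# [('alice', 'remote', 's-1'), ('bob', 'office', None)]
```

This mirrors what the first transaction (username, maxspan=2s, keeporphans=true) is being asked to do: the orphaned login events are the office logins, and the merged ones carry the session_id into the second correlation step.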
I have a query that works, but the output calculates a percentage column in a chart. I need to show the total of TAM and the correct percentage value for all the returned rows. I'm using this:

| inputlookup Patch-Status_Summary_AllBU_v3.csv
| stats count(ip_address) as total, sum(comptag) as compliant_count by BU
| eval patchcompliance=round((compliant_count/total)*100,1)
| fields BU total compliant_count patchcompliance
| rename BU as Domain, total as TAM, patchcompliance as "% Compliance"
| appendpipe [stats sum(TAM) as TAM sum(compliant_count) as compliant_count | eval totpercent=round((compliant_count/TAM)*100,1)]
| eval TAM = tostring(TAM, "commas")

The output is:

Domain | TAM     | compliant_count | % Compliance
BU1    | 1,180   | 1146            | 97.1
BU2    | 2,489   | 2420            | 97.2
BU3    | 409,881 | 96653           | 23.6
BU4    | 3       | 3               | 100.0
BU5    | 1,404   | 1375            | 97.9
BU6    | 119,003 | 90100           | 75.7
BU7    | 33,506  | 30669           | 91.5
BU8    | 2,862   | 1997            | 69.8
BU9    | 239,897 | 216401          | 90.2
BU10   | 3,945   | 3832            | 97.1
BU11   | 569     | 482             | 84.7
       | 814,739 | 445078          |

If I add avg("% Compliance") as "% Compliance" to the appendpipe stats command, it does not add up to the correct percentage, which in this case is 54.6; the average displays 87.1 instead. How do I calculate the correct percentage for the total row using the totals of columns TAM and compliant_count?
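The discrepancy described above comes from averaging per-row percentages (which weights tiny and huge BUs equally) instead of dividing the column totals. A short sketch using three of the BU rows from the table illustrates the difference (the function names and structure here are illustrative, not Splunk's):

```python
# Each row: (TAM, compliant_count) — three of the BU rows from the question.
rows = [
    (1180, 1146),     # BU1: ~97% compliant
    (409881, 96653),  # BU3: ~24% compliant, but dominates the host count
    (119003, 90100),  # BU6: ~76% compliant
]

# Misleading: averaging the per-row percentages weights every BU equally,
# regardless of how many hosts it contains.
avg_of_pcts = sum(100 * c / t for t, c in rows) / len(rows)

# Correct total: divide the summed compliant_count by the summed TAM.
total_tam = sum(t for t, _ in rows)
total_compliant = sum(c for _, c in rows)
overall_pct = round(100 * total_compliant / total_tam, 1)

print(round(avg_of_pcts, 1), overall_pct)
```

The same fix applies in the appendpipe: compute the total row's percentage from sum(compliant_count) and sum(TAM), never from avg("% Compliance").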