All Topics


I want to extract the client IP and the user "DELTA\Kelly" from Windows event messages like this one:

Message=The following client performed a SASL (Negotiate/Kerberos/NTLM/Digest) LDAP bind without requesting signing (integrity verification), or performed a simple bind over a cleartext (non-SSL/TLS-encrypted) LDAP connection. Client IP address: 172.4.5.6:57157 Identity the client attempted to authenticate as: DELTA\Kelly Binding Type: Fixed.....

Please close.
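For what it's worth, the extraction above is just two regex captures. Here is a minimal Python sketch of the pattern; the field names client_ip and identity are my own choice, and the message text is abbreviated from the question:

```python
import re

# Sample of the raw Message text from the question (abbreviated).
message = (
    "Message=The following client performed a SASL (Negotiate/Kerberos/NTLM/Digest) "
    "LDAP bind without requesting signing (integrity verification), or performed a "
    "simple bind over a cleartext (non-SSL/TLS-encrypted) LDAP connection. "
    "Client IP address: 172.4.5.6:57157 "
    "Identity the client attempted to authenticate as: DELTA\\Kelly "
    "Binding Type: Fixed"
)

# Capture the IP (dropping the ephemeral port) and the DOMAIN\user identity.
pattern = re.compile(
    r"Client IP address:\s+(?P<client_ip>\d{1,3}(?:\.\d{1,3}){3}):\d+\s+"
    r"Identity the client attempted to authenticate as:\s+(?P<identity>\S+)"
)

m = pattern.search(message)
print(m.group("client_ip"))   # 172.4.5.6
print(m.group("identity"))    # DELTA\Kelly
```

The same named groups should drop straight into an SPL rex command against the Message field.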
Hi Splunkers, I have created a pie chart and applied color codes to it, and I have added dropdowns for my legends as well. Consider the following as my scenario:

<option name="charting.legend.labels">[started,failed,administratively stopped]</option>
<option name="charting.seriesColors">[B6C75A,F589AD,AAABAE]</option>

I get the output as expected if I choose 'ALL' in my dropdown, but if I filter on any of these values in the dropdown, I get a default color, which is not what I expect. Kindly help me check this, and please also help with the color codes data.
Hi, is there a way to disable all alerts across all apps in a single API call, without providing each saved search/alert name? Thanks.
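Not an authoritative answer, but as far as I know there is no single documented call that disables every alert at once; the usual pattern is to enumerate saved searches over the REST API and POST disabled=1 to each, using the "-" wildcard for owner and app to cover all apps. A sketch of the URL construction only — host and port are hypothetical, and the HTTP calls themselves are left out:

```python
from urllib.parse import quote

BASE = "https://localhost:8089"  # management port; hypothetical host

def disable_url(owner: str, app: str, name: str) -> str:
    """Endpoint for POSTing disabled=1 to one saved search.
    Passing '-' for both owner and app addresses the object wherever it lives."""
    return f"{BASE}/servicesNS/{owner}/{app}/saved/searches/{quote(name, safe='')}"

# Workflow: GET {BASE}/servicesNS/-/-/saved/searches to list titles,
# then POST body {"disabled": "1"} to disable_url("-", "-", title) for each.
print(disable_url("-", "-", "My Alert"))
# https://localhost:8089/servicesNS/-/-/saved/searches/My%20Alert
```

Quoting the saved-search name matters because titles routinely contain spaces and slashes, which would otherwise break the path.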
Hi, I'm trying to ingest JSON data, but it shows each field twice per event. I used the below in props.conf and am not sure what is causing the issue.

[sourcetype]
INDEXED_EXTRACTIONS = json
KV_MODE = none (tried both KV_MODE=json and none)
SHOULD_LINEMERGE = true
pulldown_type = true
TIMESTAMP_FIELDS = <timestamp field>
AUTO_KV_JSON = false
NO_BINARY_CHECK = true

Data on the SH:
Title: [RESOLVED] Increased Error Rates [RESOLVED] Increased Error Rates
Can I read the dmc_forwarder_assets lookup using the REST API of the Monitoring Console?
I'm currently looking at increasing the performance of our Splunk search head. I'm running a number of apps at the request of my network engineer. However, I'm noticing a number of things: max concurrent searches is at 12, and it appears to be limited by the indexer (4 cores); accelerating data models isn't hitting my search head hard, but it's behind, possibly due to limited/skipped searches; and InfoSec and Palo Alto's app run about an hour behind and incredibly slowly. It's kind of frustrating.

I should mention that I'm currently running the Splunk indexer and Splunk search head (separate servers) in Azure. Things seem decent in Azure, and I am increasing the instance size. Some other things I'm thinking of doing: increasing the maximum concurrent searches on the indexer and search head from 3 to 4 (I'm fairly optimistic the servers can handle it), and increasing the Azure instance. I'm currently using Azure B4ms for the indexer and B8ms for the search head; I realize that might not be the best configuration, so pardon my previous ignorance on these topics.

Before I invest in these, I'd love to get the Splunk community's input on all of this. I admit Splunk is becoming very app-heavy, which I'm not pleased about, so any way of increasing performance is appreciated. Oh, one last thing: I'm still fairly new to data modeling. Though I've worked with the CIM, I haven't tagged everything. I'm wondering whether limiting the tags to specific data models would be of great benefit to performance, or just harm it.
I realize this may be more of a Linux problem than a Splunk problem, but I'm using code specifically for Splunk, so perhaps someone here can help. I compiled and installed collectd using the instructions at https://docs.splunk.com/Documentation/InfraApp/latest/Admin/ManageAgents. I have an HEC configured on my Splunk instance and can write to it:

curl -k https://1.2.3.4:8088/services/collector/raw -d "Testing"
{"text":"Token is required","code":2}

Yes, I know I need a token in the curl command, but this at least demonstrates connectivity. I've configured the write_http and write_splunk plugins correctly, I believe:

<Plugin write_http>
  <Node "example">
    URL "http://1.2.3.4:8088/services/collector/raw"
    VerifyPeer false
    VerifyHost false
    Header "Header: Authorization: Splunk <redacted>"
    Format "JSON"
    Metrics true
    StoreRates true
  </Node>
</Plugin>
<Plugin write_splunk>
  server "1.2.3.4"
  port "8088"
  token "<redacted>"
  ssl true
  verifyssl false
</Plugin>

As soon as collectd starts, it logs "write_http plugin: curl_easy_perform failed with status 56: Recv failure: Connection reset by peer" and does so repeatedly. No metrics are indexed by Splunk. How do I fix this?
Hi, after applying STIG settings, I am no longer able to log on to the web console using an AD or local admin account. I tried 1) renaming the passwd file in the etc folder and restarting the Splunk service, and 2) renaming the passwd file in the etc folder and creating a user-seed.conf file in the etc\system\local folder, but I still can't log on. Has anyone had this issue and knows how to fix it? One of the STIGs requires me to set password complexity and lockout and change the permissions on Splunk. Thanks in advance!
Hi, I want to set up Splunk in a home lab for testing and learning purposes. Seeing that I'm so new, and that the Enterprise trial license lasts only 60 days, is it possible to start with the free license while I'm learning the basics and then turn on the trial license later, when I'll be able to make better use of it?
Hello, while testing my workflow actions, I've noticed a really weird thing happening: when a field has the word "all" in its name, it is not shown automatically among the interesting fields on the event (see the image in the spoiler tag for a better understanding). So to use the workflow action I have for that field, I need to add it manually by selecting the field in the "All Fields" option. Does anyone know why this is happening? Is it expected? Is there any configuration I am missing? Thanks
Hi, I came in today and about 5 indexes are disabled. I am getting the following messages, but I am unsure what to do. Even after restarting, I am getting the message that they are disabled.

06-18-2020 21:34:47.997 +0200 INFO IndexWriter - idx=_internal Handling shutdown or signal, reason=2
06-18-2020 21:34:47.998 +0200 INFO IndexProcessor - Reloading index config: shutdown subordinate threads, now restarting
06-18-2020 21:34:47.998 +0200 INFO IndexProcessor - indexes.conf - Rawdata integrity control (enableDataIntegrityControl) is disabled for index=_internal
06-18-2020 21:34:47.998 +0200 INFO HotDBManager - idx=_internal minHotIdleSecsBeforeForceRoll=4294967295
06-18-2020 21:34:47.998 +0200 INFO HotDBManager - idx=_internal Setting hot mgr params: maxHotSpanSecs=432000 maxHotBuckets=20 minHotIdleSecsBeforeForceRoll=4294967295 maxDataSizeBytes=1048576000 quarantinePastSecs=77760000 quarantineFutureSecs=2592000
06-18-2020 21:34:47.998 +0200 INFO HotDBManager - closing hot mgr for idx=_internal
06-18-2020 21:34:47.998 +0200 INFO IndexWriter - idx=_internal, Initializing,
I was wondering if someone could help me reset my password. I need to log in to do my labs to take the cert soon. Please and thank you.
I have an app where the default dashboard has a search bar where I can select from a number of scenarios for which I can display data. One of those scenarios is a dropdown titled "Locking". Underneath this I am presented with a list of database environments. I need to be able to display reports that I've written for each of the selections under Locking. Can someone explain to me how that might be done? Here is what the XML for the navigation looks like. Where it says "locktroubleshooter_prod" and "locktroubleshooter_uit", I want to call a report. Currently, these are views that use Advanced XML, and we have upgraded to 8.0.3, so the views no longer work because Advanced XML was deprecated.

<nav>
   <collection label="Search">
      <view name="DB_Search" />
   </collection>
   <collection label="Dashboard">
      <view name="default_dash" default="true" />
   </collection>
   <collection label="CPU">
      <view name="cpu" />
   </collection>
   <collection label="Memory">
      <view name="memory_instance" />
      <view name="memory_set_info" />
      <view name="memory_pool_info_by_set_type" />
      <view name="memory_pool_info_by_pool_type" />
   </collection>
   <collection label="IO">
      <view name="io_sync" />
      <view name="io_async_data" />
      <view name="io_async_index" />
      <view name="io_direct" />
   </collection>
   <collection label="Network">
      <view name="network_bandwidth" />
   </collection>
   <collection label="Locking">
      <view name="locktroubleshooter_prod" />
      <view name="locktroubleshooter_uit" />
   </collection>
   <collection label="Queries">
      <collection label="Overview">
         <view name="queries_overview" />
      </collection>
      <collection label="Top SQL">
         <view name="queries_topsql" />
      </collection>
   </collection>
   <collection label="Database Size">
      <view name="dbsize" />
      <view name="dbgrowth" />
      <view name="tablesize" />
      <view name="tablegrowth" />
   </collection>
   <collection label="Views">
      <view name="helloworldview" />
      <collection label="Others">
         <view source="unclassified" />
      </collection>
   </collection>
</nav>
Hello @gcusello, in my lab instance we upload our data for testing purposes, to check whether the data is properly parsed. But nowadays we are unable to upload any data. I think there is a memory issue, or it may be that on that test instance there are already hundreds of saved searches running. So please give me a solution for how I can stop all my saved searches in one go, or any other solution.
I have downloaded the binaries and am trying to implement a sidecar pattern per this article: https://www.appdynamics.com/blog/engineering/how-to-instrument-docker-containers-with-appdynamics/ I am following the install docs at https://docs.appdynamics.com/display/PRO45/Install+the+.NET+Agent+for+Linux on a sidecar container and exposing the directory by mounting a volume. I am accessing this volume directory from another container, but when I start the application the agent doesn't seem to start up. The environment variables look correct to me, pointing to the correct host and so on. Is there a certain way to start up the agent?
Hi everyone, I am using version 8.0.0 of Splunk Enterprise and I am running into a problem due to a version conflict of a JavaScript library. In one of my dashboards, I am using JavaScript for some customization where I require moment.js for a date-picker. The date-picker library only works with version 2.24.0; however, Splunk also uses moment.js internally, but an older version, 2.8.3. When I try to include moment.js (version 2.24.0) there is a conflict with Splunk's version, and hence I cannot use that library in my dashboard. Please suggest a way to resolve this conflict between Splunk's version of moment.js and my own, as the date-picker library is only compatible with version 2.24.0 of moment.js. I shall be hugely grateful for any suggestions. Regards, Umair
Hi everyone, recently I installed RabbitMQ on CentOS 8 for my company. We also use Splunk Enterprise, so we want to integrate RabbitMQ with Splunk so that we can see, search, and check the logs coming from RabbitMQ in Splunk. I don't know how to do that; I googled it but didn't find any info about it. Can anybody help me with this goal? Thank you
I have installed a Splunk forwarder, version 8.0.4, on a RHEL machine. After a successful install, which I am now getting logs from, I cannot run commands like "splunk list monitor", "splunk list forward-server", "splunk list deploy-poll", etc. locally on the target universal forwarder. For whatever reason, it does not allow me to run these commands while the forwarder is on. However, if I turn the forwarder off (splunk stop), then I can run all of these commands without hanging. Obviously, if I run "ps -ef | grep splunk", I find splunk processes; often I'll see a splunk start or splunk restart process in there. I believe the process is hanging on that (not sure), but when I kill those processes it auto-stops splunk, and I can then run the commands (with it now off) and get the info I need. I have installed the universal forwarder many times and have never encountered this. Thoughts?
Hello, I am running Splunk 7.1.4 on AMI Linux, with Splunk Web from a Windows 10 desktop. I am trying to create a report that provides average time_taken, error count, number of unique IPs, and total hits for 24 URI "groups". For example, cs_uri_stem=/Group1* would cover cs_uri_stem=/Group1, cs_uri_stem=/Group1/subgroup1, cs_uri_stem=/Group1/subgroup2, etc. The following works for one group that I specifically define in the search:

index=index host=host sourcetype=iis cs_uri_stem=/Group1*
| stats avg(time_taken) as avgtime count(eval(sc_status>=400)) as "Total #errors" dc(x_Forwarded_For) as "Total # Unique ClientIPs" min(cs_uri_stem) as cs_uri_stem count
| eval avgtime=round(avgtime,0)
| eval cs_uri_stem=rtrim(cs_uri_stem,"*")

This gets me a table with one row containing the combined data for the Group1 URIs. My question is how I can run this search "looping" through all 24 URI groups and end up with a table showing data for all 24 groups. I have added the Group* names to a lookup file and tried using the lookup in a subsearch, but I couldn't get that to work. I also tried a number of different approaches without success. I hope I have explained the issue clearly. Any suggestions/comments greatly appreciated.
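A common way to avoid a 24-search loop in SPL is to derive a group field from cs_uri_stem and aggregate by it, e.g. | eval group=mvindex(split(cs_uri_stem,"/"),1) | stats ... by group. Here is that derivation sketched in Python, assuming the group is always the first path segment (sample stems invented):

```python
def uri_group(cs_uri_stem: str) -> str:
    """First path segment of the URI stem, i.e. the 'Group' bucket.
    Mirrors SPL: "/" + mvindex(split(cs_uri_stem, "/"), 1)."""
    parts = cs_uri_stem.strip("/").split("/")
    return "/" + parts[0]

# Every stem under /Group1 lands in the same bucket, so a single
# `stats ... by group` replaces 24 hand-written searches.
for stem in ["/Group1", "/Group1/subgroup1", "/Group1/subgroup2", "/Group2/x"]:
    print(stem, "->", uri_group(stem))
```

If the group names are not simply the first path segment, a lookup mapping cs_uri_stem prefixes to group names would play the same role as this function.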
I'm trying to run a search and find the most common strings in a field of the results. It seems like there is a way, somewhere between comm and diff, but I haven't been able to figure it out. I simply want to know which strings are exactly the same across all results of a particular field.
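Depending on the intent, "exactly the same across all results" is either a set intersection (values present in every result) or a frequency count (most common values, which SPL covers with top or stats count by). Both interpretations can be sketched in Python, with invented sample data:

```python
from collections import Counter
from functools import reduce

# One list of field values per result/event (invented sample data).
events = [
    ["alpha", "beta", "gamma"],
    ["alpha", "beta", "delta"],
    ["alpha", "beta", "epsilon"],
]

# Strings present in *every* result: intersect the per-event sets.
common_to_all = reduce(set.intersection, (set(e) for e in events))
print(sorted(common_to_all))  # ['alpha', 'beta']

# Most frequent strings overall (the `top` / `stats count by` reading).
counts = Counter(v for e in events for v in e)
print(counts.most_common(2))
```

For the frequency reading, the SPL would simply be something like | top field or | stats count by field | sort -count.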