All Topics

I was able to manage/view the XML for private user dashboards until last Friday. Since today I have realised that I cannot even list those private dashboards under Settings > UI > Views, or under Settings > All Configs. As platform admins, we promote user-made objects to global sharing. I compared with my teammates: we all share the same role (admin), and yet they are able to view the same private dashboards that I cannot view or list. SH version 8.0.5, in an SHC.
We are testing our upgrade from Splunk 7.3.1 to 8.1.2. Once we upgrade the SH, regardless of whether the indexers are on 7.3.1 or 8.1.2, we are unable to run any searches. Every attempted search crashes Splunk 100% of the time; the crash log looks like the one below. Resource usage is low (CPU 99% idle, memory usage under 10%). systemd and boot-start are enabled.

[build 545206cc9f70] 2021-02-22 17:44:41 Received fatal signal 6 (Aborted).
Cause: Signal sent by PID 15454 running under UID 1863478.
Crashing thread: RemoteTimelineReadThread
Registers:
RIP: [0x00007F8BE1E67F77] gsignal + 55 (libc.so.6 + 0x34F77)
RDI: [0x0000000000003C5E]
..
Backtrace (PIC build):
[0x00007F8BE1E67F77] gsignal + 55 (libc.so.6 + 0x34F77)
[0x00007F8BE1E6934A] abort + 314 (libc.so.6 + 0x3634A)
[0x000056191C24136E] ? (splunkd + 0x128F36E)
[0x000056191E355416] _ZN10__cxxabiv111__terminateEPFvvE + 6 (splunkd + 0x33A3416)
[0x000056191E355461] ? (splunkd + 0x33A3461)
[0x000056191E355594] ? (splunkd + 0x33A3594)
[0x000056191C2632C6] ? (splunkd + 0x12B12C6)
[0x000056191C263724] ? (splunkd + 0x12B1724)
[0x000056191D28257B] _ZN6ThreadC1EPKcz + 443 (splunkd + 0x22D057B)
[0x000056191DB44181] _ZN39EventDownloadInitializeCollectionThreadC1EP20EventDownloadManager + 33 (splunkd + 0x2B92181)
[0x000056191DB490A8] _ZN20EventDownloadManagerC2ERK3Str + 600 (splunkd + 0x2B970A8)
[0x000056191D7DF4BF] _ZN9TimelinerC2ERK3Str + 319 (splunkd + 0x282D4BF)
[0x000056191C523782] _ZN24RemoteTimelineReadThread4mainEv + 114 (splunkd + 0x1571782)
[0x000056191D281F87] _ZN6Thread8callMainEPv + 135 (splunkd + 0x22CFF87)
[0x00007F8BE21E1724] ? (libpthread.so.0 + 0x8724)
[0x00007F8BE1F1FEED] clone + 109 (libc.so.6 + 0xECEED)
Linux / ito044165 / 4.4.180-94.130-default / #1 SMP Fri Sep 4 09:11:24 UTC 2020 (e5e5142) / x86_64
C++ exception: exception_addr=0x7f8be158bfc0 typeinfo=0x56191f0bf6e8, name=15ThreadException
what(): RemoteTimelineReadThread: about to throw a ThreadException: pthread_create: Resource temporarily unavailable; 20 threads active
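The "pthread_create: Resource temporarily unavailable" at the bottom of the trace usually points at a per-user process/thread limit rather than CPU or memory. A quick diagnostic sketch (assuming a Linux host, run as the user splunkd runs under; note that systemd units can override limits via LimitNPROC, so the shell's ulimit may not match the service's):

```shell
# Per-user process limit; each thread counts against it.
nproc_limit=$(ulimit -u)
echo "max user processes: $nproc_limit"

# System-wide thread ceiling, if readable on this host.
cat /proc/sys/kernel/threads-max 2>/dev/null || true
```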
Our Splunk SH cluster scheduler keeps stopping; users complain that alerts and scheduled reports are not running or processing. We disabled and re-enabled the scheduler on the captain, but that didn't work. We then switched captaincy to another member, and scheduling/processing resumed. Today we had a recurrence, but on a different search head cluster; we switched captains and that remediated the issue again. Version 8.0.6; one recent change is that cascading bundle replication was enabled around a month ago.
Hello, I'm trying to create a report that groups DNS names with an identification number (QID) from a vulnerability report. I'm using | stats values(dns), which works well enough, but the problem is that the 'First Discovered' column may have 3-4 values depending on the instance being scanned:

DNS                          QID     First Discovered  SLA Status
Server-1 Server-2 Server-3   Q-3333  1-1-2021          Overdue
Server-4 Server-5            Q-3333  2-1-2021          OK

The problem is that I need Servers 4-5 to appear with Servers 1-3 under the 1-1-2021 date due to SLA requirements (it doesn't matter when the QID was discovered on a specific device; we're only concerned with its first detection). Ideally it would look like this:

DNS                                            QID     First Discovered  SLA Status
Server-1 Server-2 Server-3 Server-4 Server-5   Q-3333  1-1-2021          Overdue

Here are the relevant parts of my query:

%base search% | stats values(DNS) as DNSf BY QID, POAM_ID, Controls, signature, DIAGNOSIS, dvc, OS, firstdisc, duedate, PATCHABLE

Thanks!
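Since the goal is one row per QID with all servers merged and only the earliest date kept, the underlying transformation is a group-by with a min over the date (in SPL terms, moving firstdisc out of the BY clause and into min(firstdisc)). A minimal Python sketch of that logic, using made-up rows mirroring the table above:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical rows mirroring the question's table: (dns, qid, first_discovered)
rows = [
    ("Server-1", "Q-3333", "1-1-2021"),
    ("Server-2", "Q-3333", "1-1-2021"),
    ("Server-3", "Q-3333", "1-1-2021"),
    ("Server-4", "Q-3333", "2-1-2021"),
    ("Server-5", "Q-3333", "2-1-2021"),
]

def collapse_by_qid(rows):
    """Merge all servers under each QID and keep only the earliest
    first-discovered date, like `stats values(DNS) min(firstdisc) by QID`."""
    grouped = defaultdict(lambda: {"dns": set(), "first": None})
    for dns, qid, disc in rows:
        d = datetime.strptime(disc, "%m-%d-%Y")
        g = grouped[qid]
        g["dns"].add(dns)
        g["first"] = d if g["first"] is None else min(g["first"], d)
    return {qid: (sorted(g["dns"]), g["first"].strftime("%m-%d-%Y"))
            for qid, g in grouped.items()}

result = collapse_by_qid(rows)
```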
My splunkd service randomly stops on my Windows Server 2012 host, making the web interface unreachable. I have to restart the service on the Windows server to make it reachable again. Is there something I should look at to fix this issue?
We are trying to create a health-rule self-service mode for our developers, and we have some doubts about how to explain the difference between using standard deviations and a baseline percentage. What is the difference between how the two are calculated? And in which scenarios is one better than the other? Regards!
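One way to explain the two threshold styles: a standard-deviation rule widens with the metric's spread (noisy metrics get wider tolerance), while a baseline-percentage rule is a fixed offset from the baseline mean regardless of variance. This is an illustrative Python sketch with invented sample values, not the product's internal calculation:

```python
import statistics

# Hypothetical hourly response times (ms) forming the baseline window.
baseline = [100, 102, 98, 105, 95, 101, 99, 103]

def threshold_stddev(samples, n_sigmas=2):
    """Threshold = mean + n standard deviations; adapts to the spread."""
    return statistics.mean(samples) + n_sigmas * statistics.stdev(samples)

def threshold_percent(samples, pct=20):
    """Threshold = baseline mean + a fixed percentage; ignores the spread."""
    return statistics.mean(samples) * (1 + pct / 100)
```

A rough rule of thumb: percentage thresholds suit steady metrics where "X% above normal" is meaningful; standard deviations suit metrics whose normal variability differs widely between applications.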
How do you update the time in Phantom for daylight saving time? We are using NTP on the server and the server time is fine, and the user and system time zones are set properly, but the time is still an hour behind. We have also restarted Phantom since the time change.
I have one CSV file with empid as the key and name as the value. Sample data: empid E101 has the value John. I want to pass the key E101 and fetch the employee name, i.e. John. Please note that I am getting the empid value from the Splunk search itself, as part of the query, and based on that empid I want to fetch the employee name from the CSV file. As the employee name is not present in the log, I have uploaded a mapping of empid to name via the CSV file. Can anyone help me with this? @gokadroid
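In SPL this is typically done with the lookup command (something like | lookup <your_lookup_name> empid OUTPUT empname, assuming the lookup file is defined with those field names). The underlying key-to-value mapping can be sketched in Python like this, with a made-up two-row CSV:

```python
import csv
import io

# Hypothetical CSV content matching the described lookup file.
csv_text = "empid,empname\nE101,John\nE102,Jane\n"

def load_lookup(fh):
    """Build an empid -> empname map, the join Splunk's `lookup` performs."""
    return {row["empid"]: row["empname"] for row in csv.DictReader(fh)}

lookup = load_lookup(io.StringIO(csv_text))
```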
Hello Guys, I am trying to create a search that predicts the following week's disk-usage values using a 95th-percentile confidence interval and extracts the earliest moment a partition could run out of space. So far I have come up with this search, but I am not able to get the value of full_result in my output:

| mstats avg(LogicalDisk.Free_Megabytes) as DiskFree WHERE index=blah-cloud-metrics AND host!=DEV-* (instance=C: OR instance=D: OR instance=G: OR instance=H:) span=10m by host,instance
| eval full_result = host+"****"+instance
| timechart span=10m avg(DiskFree) as DiskFree
| predict future_timespan=1008 DiskFree
| search "lower95(prediction(DiskFree))" <= 0
| head 1
| table _time, lower95(prediction(DiskFree)), full_result

Essentially I need the host, drive (instance), lower95(DiskFree), and _time values in the output of my search. Feel free to post your suggestions if you have better search options in mind. Thank you! Regards, tafzal
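Conceptually, predict is extrapolating the drain rate forward to find the zero crossing of free space. A crude Python sketch of that idea with invented samples (no replacement for predict's confidence intervals, just the arithmetic):

```python
# Hypothetical free-space samples (GB) at 10-minute intervals.
samples = [100.0, 98.0, 96.0, 94.0]
interval_min = 10

def minutes_until_full(samples, interval_min):
    """Linearly extrapolate the average drain rate to estimate when
    free space reaches zero (a crude stand-in for `predict`)."""
    rate = (samples[0] - samples[-1]) / ((len(samples) - 1) * interval_min)  # GB/min
    if rate <= 0:
        return None  # free space is not shrinking
    return samples[-1] / rate

eta = minutes_until_full(samples, interval_min)
```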
Hi, I need to import the Security and Application logs of many Windows servers into Splunk, but for security reasons I cannot install a Splunk universal forwarder instance. I read in the Splunk documentation that using WMI to import the logs is not recommended. What do you recommend? Thanks!
How do I check whether a couple of hosts are sending data to Splunk Enterprise? They are both VMs.
Hi, I am using a heatmap to display buffer time; it uses only the count for the specific time frame, so I converted HH:MM:SS to minutes and used that as the count. But I want to show the buffer time in both minutes and HH:MM:SS, along with the process name, in the tooltip on mouseover. Currently only the x-axis and y-axis values (minutes and date) are shown; I want to add more values to the tooltip so that I can see HH:MM:SS, minutes, process name, and date. Could you please help me understand what changes are required in the heatmap JS, or suggest an alternate solution? (A sample screenshot shows the tooltip with only the x-axis and y-axis values; additional values like HH:MM:SS and process name are needed.) The JS I used is the one that came with the heatmap app in Splunk.
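Whatever tooltip hook the heatmap JS exposes, the minutes-to-HH:MM:SS conversion itself is straightforward; here it is sketched in Python (the same divmod arithmetic would apply in a JavaScript tooltip formatter):

```python
def minutes_to_hms(total_minutes):
    """Convert a minute count (possibly fractional) to an HH:MM:SS string."""
    total_seconds = round(total_minutes * 60)
    h, rem = divmod(total_seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}"
```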
Hi, I have a weird requirement where I want to find out: if a user has signed into app1, count them in the results. Below is the query which shows sign-ins to app1:

index=test | search apiKey=XXXXX | search (event_name=cable.signin.success AND app_version="1.0.1")

BUT if the same user has signed into app1 and then signed into app2, exclude them from the results. Below is the query which shows users signed into app2:

index=test | search apiKey=XXXXX | search (event_name=cable.signin.success AND app_version="1.0.2")

Once that is done, I want to dedup the customers (field: uid) and then show the result. Do I need to use a subsearch, or is there a better way to do this? Let me know if someone can help.
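The underlying logic is a set difference: users seen on app1 minus users seen on app2 (in SPL this can often be one search over both app versions, stats values(app_version) by uid, then a filter, avoiding a subsearch). A Python sketch of the set difference with made-up events:

```python
# Hypothetical sign-in events: (uid, app_version).
events = [
    ("u1", "1.0.1"), ("u2", "1.0.1"),
    ("u2", "1.0.2"), ("u3", "1.0.2"),
]

def app1_only_users(events):
    """Unique users seen on app1 (1.0.1) but never on app2 (1.0.2)."""
    app1 = {uid for uid, v in events if v == "1.0.1"}
    app2 = {uid for uid, v in events if v == "1.0.2"}
    return app1 - app2  # sets also give the dedup for free

users = app1_only_users(events)
```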
I've got a bit of a weird situation, and I don't have the Splunk technical know-how to fix it myself, so I thought I'd put it here and see if someone else has a solution. I'm using this search:

| inputlookup CISOVRMTier0Unixweekly.csv
| search pluginName IN ("*Java*" "*java*") NOT pluginID IN (83186 83186 87011 87171 87312 90709 92606 94511 96610 96803 138506 139583 140504)
| rex field=pluginText "remote host :[\r\n][\r\n](?<pluginText1>[\w\W]*)"
| rex field=pluginText "Remote package installed : (?<RHEL>.+)" max_match=0
| makemv delim=" " pluginText1
| mvexpand pluginText1
| rex field=pluginText1 "Path : (?<Path>.+)" max_match=0
| rex field=pluginText1 "Installed version : (?<Installed>.+)" max_match=0
| fillnull value=NULL Path
| eval Installed=case(Path="NULL",RHEL, 1=1, Installed)
| mvexpand Path
| eval patchPubDate=strptime(patchPubDate, "%m/%d/%Y")
| stats min(patchPubDate) as patchPubDate last(dnsName) as dnsName last(netbiosName) as netbiosName max(vprScore) as vprScore values(Name) as Name values(macAddress) as macAddress values(EIR) as EIR values(Acronym) as Acronym values(Environment) as Environment values(CMDB-OS) as CMDB-OS values(PortfolioMgr) as PortfolioMgr values(ProgMgr) as ProgMgr values(SCMgr) as SCMgr values(SCBPL) as SCBPL values(ISSO) as ISSO values(CMDB_Name) as CMDB_Name values(HostName) as HostName by Path Installed ip operatingSystem
| eval patchPubDate=strftime(patchPubDate, "%x")
| table CMDB_Name HostName ip Path Installed operatingSystem vprScore patchPubDate Name dnsName macAddress EIR Acronym Environment CMDB-OS PortfolioMgr ProgMgr SCMgr SCBPL ISSO

Which works great, but I don't like the part that reads:

| makemv delim=" " pluginText1

which exists to represent two carriage returns, but I don't know what I could use to replace it. I've tried variations of [\r\n] and they don't seem to work, and I don't know what I'm doing wrong. Can someone offer me some suggestions or ideas?
I have an HTTP POST I would like to make from the search head when a user clicks a button in their browser. Currently, I am creating a search that runs a custom command. Due to the nature of the issue, a user can generate 200 events in a few seconds, so as you would expect, I am hitting limits all over the place. I have a .js file that builds the Splunk command to call a custom command that does the HTTP POST. I would like to skip Splunk where I can. The problem is that the JavaScript runs in the local web browser, while the POST needs to happen on the private network that Splunk is on. I am taking advantage of Splunk's authentication to run the search, so I would like to do the same with a script. I presume this is either built in and I missed it, or really hard. Does anyone have any ideas?
Hello, I am looking for clear, didactic documentation on how to onboard data into Splunk. Does anybody have useful links, please? Regards
Hello All, What is the difference between packaging a Splunk app using Splunk's Packaging Toolkit and packaging the app via the splunk package app command? I've packaged the app both ways: when I use Splunk's Packaging Toolkit I get a .tar.gz, and when I use the command splunk package app TestApp I get a .spl. That isn't an issue; I can convert an .spl into a .tar.gz. My question is: does the Packaging Toolkit do something that the splunk package app command can't?
Let's say I create an alert for when the count of field_A is greater than 10 for any one user_id. The alert looks back 7 days to see if count(field_A) > 10 for a user_id, and the alert runs daily. Currently, if user_id 12345 has count(field_A) > 10 on Monday, my alert will continue to trigger on user_id 12345 until the following Monday, creating redundant alerts. I have to look back a week, and I have to check daily. How can I prevent user_id 12345 from triggering the alert 7 days in a row, assuming that user's activity has stopped? It seems like I could log the alert results somehow and check whether user_id 12345 is in the past week's alerts, but I'm not sure how to make this happen. I have created an alert action to both email and log the alerts, but I am only getting the emails. My log event is set up with default values, except that I set sourcetype to "splunk_alerts". I left the index as the default, and I've tried leaving Host blank or specifying "localhost". I don't know what else to try, and I'm not sure this is the right rabbit hole to continue down, since I don't know whether the alert results with user_ids even get logged, or whether it's just a log event saying there was an alert. The only alternative that comes to mind is having the alert create a CSV and then creating a job to import that CSV into Splunk and compare against those logs for future alerts. That seems like a hassle, though. I am a Power user, btw.
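A common pattern for this is to have the alert append its triggered user_ids to a lookup (outputlookup append=true) and have the search filter new results against that lookup (inputlookup), so a uid that already alerted within the window is suppressed. The suppression check itself reduces to the following, sketched in Python with made-up prior alerts:

```python
from datetime import datetime, timedelta

# Hypothetical log of (uid, alert_time) pairs from previous alert runs.
previous_alerts = [
    ("12345", datetime(2021, 3, 1)),
    ("67890", datetime(2021, 2, 1)),
]

def should_alert(uid, now, previous_alerts, window_days=7):
    """Suppress a uid that already alerted within the look-back window,
    mirroring an outputlookup/inputlookup suppression list."""
    cutoff = now - timedelta(days=window_days)
    return not any(u == uid and t >= cutoff for u, t in previous_alerts)
```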
Hello, I was wondering where I should click to access /etc/system/default, which I need to edit. https://docs.splunk.com/Documentation/Splunk/latest/Data/Advancedsourcetypeoverrides
Hi, We have an index that is fed from an EKS/K8s infrastructure, receiving roughly 4 million events per 15 minutes during peak; the index takes in roughly 80GB/day. Running queries on the data works great when searching within the current day; however, historical searches, even when scoped with the proper fields, take a very long time and drive the load on my indexers very high. I have not modified any of this index's parameters in indexes.conf. It is a SmartStore index with roughly 500GB of local cache configured. If anyone could let me know what tweaks might be best, it would be greatly appreciated.