All Topics

Hi there! How are you doing? Our FIM tool is detecting modifications to the /etc/passwd file by the splunkfwd user on some of our critical Linux servers that have the Splunk Universal Forwarder installed. Do you know if this behavior is correct? Shouldn't it be modifying /opt/splunkforwarder/etc/passwd instead? Thank you very much! Regards, Juanma. PS: when echoing $SPLUNK_HOME it appears to be blank for other users, but the tool is sending logs correctly to Splunk Cloud.
Hi Team, I need to decrease the number of indexers we use by half. In my current configuration the site replication factor is 5 in total with origin:3, and the site search factor is 3 in total with origin:2. My total number of indexers is 24 and I want to decrease the count to 12. I would like the complete process for reducing the indexer cluster size so that the buckets carrying site information are not impacted.
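A rough outline of the usual approach (a sketch, not a validated runbook; verify against the official indexer cluster documentation for your Splunk version before running anything): first confirm that the 12 remaining peers per site can still satisfy site_replication_factor and site_search_factor, then remove peers one at a time with enforce-counts so the cluster manager re-replicates buckets before each peer drops out.

```
# On each indexer being decommissioned, one at a time:
splunk offline --enforce-counts

# On the cluster manager, wait until the replication and search
# factors are reported as met before taking the next peer offline:
splunk show cluster-status --verbose
```

Taking peers down in parallel risks losing the last copy of a bucket, which is why the one-at-a-time loop matters.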
How do I extract the alphanumeric and numeric values from a line? Both are dynamic values: <Alphanumeric>_ETC_RFG: play this message: announcement/<numeric>
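A hedged sketch: assuming the raw line literally follows the format above, one regex with two named groups captures both dynamic values. Shown in Python for clarity (the sample values are hypothetical); the same pattern should work in an SPL `rex`.

```python
import re

# Hypothetical sample line following the format described in the post
line = "AB12CD_ETC_RFG: play this message: announcement/4567"

# One capture group for the alphanumeric prefix, one for the numeric suffix
pattern = r"(?P<code>[A-Za-z0-9]+)_ETC_RFG: play this message: announcement/(?P<num>\d+)"
m = re.search(pattern, line)
if m:
    print(m.group("code"), m.group("num"))  # AB12CD 4567
```

In SPL this would be roughly `| rex "(?<code>[A-Za-z0-9]+)_ETC_RFG: play this message: announcement\/(?<num>\d+)"` (untested sketch).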
I created an alert from the search below, and it emails a PDF. Is there a way to add the most recent event from each of the hosts in this search and include it in the email? | metadata type=hosts | where recentTime < now() - 10800 | eval lastSeen = strftime(recentTime, "%F %T") | fields + host lastSeen
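A hedged sketch of one way to pull the latest raw event per host (index=your_index is a placeholder; `map` runs one subsearch per host, so it can get expensive with many hosts):

```
| metadata type=hosts index=your_index
| where recentTime < now() - 10800
| eval lastSeen=strftime(recentTime, "%F %T")
| map maxsearches=100 search="search index=your_index host=\"$host$\" | head 1 | eval lastSeen=\"$lastSeen$\" | table host lastSeen _raw"
```

The resulting table, with the _raw column, can then be attached to the alert email in place of the original search.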
Hello I would like a search to show the last entry of host="1.1.1.1", and show the full entry.   Thank you
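A minimal sketch, assuming the events live in a normal event index (index=your_index is a placeholder): Splunk returns events in reverse time order, so `head 1` keeps the most recent one.

```
index=your_index host="1.1.1.1"
| head 1
| table _time _raw
```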
Hello, I'm trying to get a solid answer on Splunk's licensing terms regarding using the Splunk Enterprise free license (0.50 GB/day) on a production system in a for-profit company. Is this allowed, or are we required to buy the 1 GB minimum license? The Splunk Enterprise download site, https://www.splunk.com/en_us/download/splunk-enterprise.html, clearly states that "After 60 days you can convert to a perpetual free license...", so if my ingestion is below the 500 MB/day limit but the license is on a production system, is this permitted, or would I have to buy a 1 GB license? Note, I haven't actually deployed Splunk Enterprise on a production system; I'm gathering all the facts before I make the move to production. Thanks.
I'm recreating a classic dashboard in the new Dashboard Studio. I exported a classic dashboard that has a drilldown search, but I am unable to find how to configure a search drilldown in Dashboard Studio. The classic dashboard includes a source value that is configured to run a search, but in Dashboard Studio I'm not finding the 'Link to search' option. I need to click on the source value and open a new window running a search on that source value. This is the screen I see for drilldown in Dashboard Studio; where do I go next to run a search?
Hey Experts, I'm new to Splunk and I'm trying to extract APP, WEB and MNOPQ from a field called result. Can someone please guide me on how to achieve this? Any help or example queries would be greatly appreciated. Thank you!

Fi a:\abc\def\MNOPQ.txt content is expected to include "A H Dis Query,0,0"
Fi a:\abc\def\APP.txt content is expected to include "A H Dis Query,0,0"
Fi a:\abc\def\WEB.txt content is expected to include "A H Dis Query,0,0"
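If the result field always contains a Windows-style path ending in .txt, the file name can be captured by matching the last path segment. A sketch in Python (sample strings follow the post, with the inner quotes simplified); the same pattern could be applied in SPL with `rex` on the result field.

```python
import re

# Sample values of the "result" field, loosely following the post
results = [
    r"Fi a:\abc\def\MNOPQ.txt content is expected to include 'A H Dis Query,0,0'",
    r"Fi a:\abc\def\APP.txt content is expected to include 'A H Dis Query,0,0'",
    r"Fi a:\abc\def\WEB.txt content is expected to include 'A H Dis Query,0,0'",
]

# Capture the final path segment before ".txt"
names = [re.search(r"\\([^\\]+)\.txt", line).group(1) for line in results]
print(names)  # ['MNOPQ', 'APP', 'WEB']
```

In SPL, a `rex field=result` with the same regex (backslashes escaped per SPL quoting rules) should yield the extracted name as a field.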
Hi everyone, I have recently installed Splunk Enterprise (9.1.2) on an Ubuntu 20.04 host with the "Splunk App for Stream" (8.1.1). On another VM (also Ubuntu 20.04, IP: 192.168.182.134) I installed my UF (9.1.2). On the UF, I installed the add-on "Splunk Add-on for Stream Forwarders" (8.1.1) to capture stream/packets. My streamfwd.conf file is:

[streamfwd]
logConfig = streamfwdlog.conf
port = 8889
ipAddr = 192.168.182.134
netflowReceiver.0.decodingThreads = 4
indexer.0.uri = http://192.168.182.132:8088

[streamfwdcapture]
netflowReceiver.0.ip = 192.168.182.134
netflowReceiver.0.interface = ens33
netflowReceiver.0.port = 9995
netflowReceiver.0.decoder = netflow

And in my streamfwd.log I have this:

2024-02-12 01:28:47 INFO [140717870847936] (CaptureServer.cpp:817) stream.CaptureServer - Found DataDirectory: /opt/splunkforwarder/etc/apps/Splunk_TA_stream/data
2024-02-12 01:28:47 INFO [140717870847936] (CaptureServer.cpp:823) stream.CaptureServer - Found UIDirectory: /opt/splunkforwarder/etc/apps/Splunk_TA_stream/ui
2024-02-12 01:28:47 INFO [140717870847936] (CaptureServer.cpp:896) stream.CaptureServer - Default configuration directory: /opt/splunkforwarder/etc/apps/Splunk_TA_stream/default
2024-02-12 01:28:53 INFO [140717870847936] (CaptureServer.cpp:1918) stream.CaptureServer - Netflow receiver configuration defined; disabling default automatic promiscuous mode packet capture on all available interfaces. Configure one or more streamfwdcapture parameters in streamfwd.conf to enable network packet capture.
2024-02-12 01:28:53 INFO [140717870847936] (SnifferReactor/SnifferReactor.cpp:327) stream.SnifferReactor - No packet processors configured
2024-02-12 01:28:54 INFO [140717870847936] (CaptureServer.cpp:2001) stream.CaptureServer - Starting data capture
2024-02-12 01:28:54 INFO [140717870847936] (SnifferReactor/SnifferReactor.cpp:161) stream.SnifferReactor - Starting network capture: sniffer
2024-02-12 01:28:54 INFO [140717870847936] (CaptureServer.cpp:2362) stream.CaptureServer - Done pinging stream senders (config was updated)
2024-02-12 01:28:54 INFO [140717870847936] (main.cpp:1109) stream.main - streamfwd has started successfully (version 8.1.1 build afdcef4b)
2024-02-12 01:28:54 INFO [140717870847936] (main.cpp:1111) stream.main - web interface listening on port 8889

But in my splunk_stream_app I have this: If anyone can help me fix this issue, I will be glad to read it.
Hi, I have a statistics table panel created in the dashboard. Could you please help me reduce the panel width? Thanks.
Why does the URA not update itself after a scan? I've had several apps installed for more than 2 weeks, and I still get the same message:

----------------------------------------
Details: This newly installed App has not completed the necessary scan.
Version: 1.1.6
Application Path: /opt/splunk/etc/apps/it_essentials_learn
Required Action: Please check again in 24 hours when the necessary scan is complete.
----------------------------------------

Even if I force a scan, nothing changes.
I have an Entity filter with token t_entity, and its dropdown has All, C2V, C2C and Cases. I have different panels showing counts for these. I have a separate panel of C2V counts which I only want to show when C2V is selected in the filter. Filter name: Entity. Token name: t_entity. How can I show a panel only when it is selected in the filter?
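In a classic (Simple XML) dashboard this is usually done with a &lt;change&gt; handler on the input that sets a helper token, plus depends on the panel. A sketch, assuming a dropdown input (the show_c2v token name is made up for this example):

```xml
<input type="dropdown" token="t_entity" searchWhenChanged="true">
  <label>Entity</label>
  <choice value="*">All</choice>
  <choice value="C2V">C2V</choice>
  <choice value="C2C">C2C</choice>
  <choice value="Cases">Cases</choice>
  <change>
    <!-- Set the helper token only when C2V is chosen -->
    <condition value="C2V">
      <set token="show_c2v">true</set>
    </condition>
    <!-- Any other choice: unset it, hiding the panel -->
    <condition>
      <unset token="show_c2v"></unset>
    </condition>
  </change>
</input>

<panel depends="$show_c2v$">
  <!-- C2V counts search goes here -->
</panel>
```

The panel renders only while $show_c2v$ is set, i.e. only while C2V is the selected entity.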
Hello, I have the following data and I want to use it to set up a dashboard. In this dashboard I want to show the current duration of equipment where the Status is not "null" (null is a string in this case, not a null value).
- Each JobID only has one EquipmentID
- The same status can occur and disappear multiple times per JobID
- There are around 10 different statuses
- I want the results to show only durations above 60 seconds

If the current time is 12:21, I would like the result to look like this:

EquipmentID  Duration  Most_recent_status
2            120       Z

Time   EquipmentID  Status  JobID
12:00  1            "null"  10
12:01  2            "null"  20
12:02  2            X       20
12:03  2            X       20
12:04  1            X       10
12:05  1            Y       10
12:06  1            Y       20
12:07  2            Y       20
12:08  1            X       10
12:09  2            Y       20
12:10  1            "null"  11
12:11  2            "null"  21
12:12  2            "null"  21
12:13  1            "null"  11
12:14  1            "null"  11
12:15  2            X       21
12:16  1            X       11
12:17  2            X       21
12:18  1            "null"  11
12:19  2            Z       21
12:20  2            Z       21

This is the query I use now; the only problem is that duration_now resets every time a new event occurs:

index=X sourcetype=Y JobID!="null"
| sort 0 _time
| stats last(_time) as first_time last(Status) as "First_Status" latest(Status) as Last_status latest(_time) as latest_times values(EquipmentID) as Equipment by JobID
| eval final_duration = case(Last_status="null", round(latest_times - first_time,2))
| eval duration_now = case(isnull(final_duration), round(now() - first_time,2))
| eval first_time=strftime(first_time, "%Y-%m-%d %H:%M:%S")
| eval latest_times=strftime(latest_times, "%Y-%m-%d %H:%M:%S")
| sort - first_time

Any help would be greatly appreciated.
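A hedged sketch of an alternative approach: use streamstats to number consecutive runs of the same status per equipment, then keep only the latest run and measure its age (untested against the data above; field names follow the post):

```
index=X sourcetype=Y
| sort 0 _time
| streamstats current=f last(Status) as prev_status by EquipmentID
| eval changed=if(isnull(prev_status) OR Status!=prev_status, 1, 0)
| streamstats sum(changed) as status_block by EquipmentID
| stats min(_time) as block_start latest(Status) as Most_recent_status by EquipmentID status_block
| sort 0 - status_block
| dedup EquipmentID
| eval Duration=round(now() - block_start, 0)
| where Most_recent_status!="null" AND Duration > 60
| table EquipmentID Duration Most_recent_status
```

Because the duration is anchored to the start of the current status run rather than the latest event, it keeps growing until the status actually changes.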
Hi Team, I want to implement the HF with HA in a container setup. Can you help here?
We have two different sites/regions in Splunk Cloud, one in North America and the other in Europe. An ES migration is planned such that all the alerts and data reporting to the Europe region will be migrated to the North America region, leaving only one ES, in North America. This is a unique scenario and I have never done such a migration; can the community please help me plan this type of migration? I need to prepare a comprehensive plan for this ES migration, highlight all possible changes, modifications and risks that need to be addressed, and figure out the dependencies. Please share any insights.
Hello, my DB Connect displays this error when I try to access it: "Can not communicate with task server, check your settings". I had configured the app before and everything was working, but then I started to receive this error in the web app. The DB Connect app does not show any configured DBs, just errors. Can you suggest anything? BR
Hi guys, I've tried to set up an alert with two alert actions (email and Slack) from a custom app. When the alert triggered:

02-09-2024 21:40:04.155 +0000 INFO SavedSplunker - savedsearch_id="nobody;abc example alert (NONPRD)", search_type="scheduled", search_streaming=0, user="myself@myself.com", app="abc", savedsearch_name="example (NONPRD)", priority=default, status=success, digest_mode=1, durable_cursor=0, scheduled_time=1707514800, window_time=-1, dispatch_time=xxxxxxxx, run_time=0.884, result_count=2, alert_actions="email", sid="scheduler_xxxxxxxxxx", suppressed=0, thread_id="AlertNotifierWorker-0", workload_pool="standard_perf"

However, I received the email alert but not the Slack alert. Is there any way to debug why the Slack alert was not sent when there are two alert actions? How do I know whether the webhook URL is correct and working? Can someone please provide the complete steps to troubleshoot issues like this? Thank you! T
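Modular alert actions (such as the Slack action) normally log their execution to splunkd.log, so one hedged first step is to search _internal for the sendmodalert component around the trigger time (a sketch; available fields can vary by version):

```
index=_internal sourcetype=splunkd component=sendmodalert
| table _time log_level _raw
```

If no "slack" entries appear at all, the action was never invoked (note the scheduler line above shows alert_actions="email" only), which points at the alert's action configuration rather than the webhook itself.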
Hi All, I am using mstats for a metric and I am evaluating my hour and minute fields something like below:

| mstats rate_avg(abc*) prestats=false WHERE "index"="def" span=3m
| rename rate_avg(* as *, *) as *
| eval Date=strftime(_time,"%m/%d/%Y")
| eval hour=strftime(_time,"%H")
| eval minute=strftime(_time,"%M")
| transpose column_name=instance
| rename "row 1" as MessagesRead
| eval MessagesRead=ROUND(MessagesRead,0)
| where MessagesRead < 1

Now I am unable to use the filter condition below:

| search NOT (instance="*xyz*" AND hour=09 AND (minute>=00 AND minute<=15))

as I don't want to alert for a particular instance only from 9:00 to 9:15, but it should still alert for other instances during this time period. Before the transpose the instance field does not exist, so I can't use the filter; after the transpose I am unable to filter on hour and minute. Can you please help with filtering after the transpose?
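One hedged alternative is to avoid transpose entirely: `untable` turns the wide mstats result into rows with an instance field while _time is still available, so the hour/minute exclusion and the per-instance filter can happen in one pass (a sketch, untested against your metrics):

```
| mstats rate_avg(abc*) prestats=false WHERE "index"="def" span=3m
| rename "rate_avg(*)" as *
| untable _time instance MessagesRead
| eval hour=tonumber(strftime(_time, "%H")), minute=tonumber(strftime(_time, "%M"))
| where NOT (like(instance, "%xyz%") AND hour=9 AND minute>=0 AND minute<=15)
| eval MessagesRead=round(MessagesRead, 0)
| where MessagesRead < 1
```

Because instance, hour and minute coexist as row fields here, the exclusion applies only to the xyz instance in the 9:00-9:15 window while other instances still alert.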
I have log entries that have the following format: [<connectorName>|<scope>]<sp>

The following are examples of the connector context:

[my-connector|worker]
[other-connector|task-0]
[my-connector|task-0|offsets]

I would like to extract the names of the connectors and build stats; the task and other metadata are not needed. For example:

Connector        Count
my-connector     2
other-connector  2

As the entries have different formats, how can I do this?
Dear all, after upgrading Splunk from version 9.1.2 to 9.2.0, the deployment server is not showing the clients, although Splunk is receiving logs from the clients, and the client agents show on all Splunk servers under Settings --> Forwarder Management except the deployment server. I don't know how that occurred; I didn't change anything. Kindly support us with this. Best Regards,
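To confirm whether the clients are still phoning home to the deployment server after the upgrade, one hedged first check is the DS's own splunkd_access log in _internal (a sketch; the URI string and field names can vary by version):

```
index=_internal sourcetype=splunkd_access phonehome
| stats count latest(_time) as last_seen by clientip
| convert ctime(last_seen)
```

If clients are phoning home but still missing from Forwarder Management, the problem is likely on the DS side (e.g. serverclass or app state after the 9.2.0 upgrade) rather than on the forwarders.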