All Topics

Hi! I have an issue with a query and the dedup command.

| eval service=case(
    (method="GET" AND match(uri, "/v1/[a-zA-Z]{2}/customers\?.*searchString=[^&]+.*")), "FindCustomer",
    (method="GET" AND match(uri, "/v2/[A-Za-z]{2}/private-customers(/[a-zA-Z0-9-]+)?(?!/)")), "ReadCustomer")
| stats count by service

Unfortunately, I have noticed that my events matching "ReadCustomer" are logged twice, so I get two events right after each other with a couple of seconds in between, which is polluting my results. I need to somehow deduplicate events that have the same uri and happen within 10s of each other. I was thinking of using

| dedup uri

but realized that I want to allow the same uri if there are more than 10 seconds between the events. If dedup could take a span, that would be the optimal way for me. Does anyone have a good idea on how to solve this? I was also thinking about | transaction as well, but I'm not sure if I can use it...
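One possible approach (just a sketch, not tested against this data, and assuming events are processed in the default newest-first order): use streamstats to compare each event's _time with the previous event that has the same uri, and keep an event only when there is no newer event with the same uri within 10 seconds.

| streamstats current=f window=1 last(_time) as prev_time by uri
| eval gap = prev_time - _time
| where isnull(gap) OR gap > 10
| eval service=case( ... )
| stats count by service

Here prev_time holds the timestamp of the next-newer event with the same uri, so an event is dropped only when a duplicate followed it within 10 seconds; the existing eval/stats pipeline then runs on the deduplicated events.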
Hi all, we used the Flutter AppDynamics agent 23.3.0 and it showed the crash stack trace in the crash details on the AppD dashboard. However, it no longer shows after we upgraded the agent to version 23.12.0; now only the exception name is shown. I checked the AppD logs with verbose logging enabled and it looks like the agent is sending the crash report event with the stack trace normally. Is there something I am missing? Thanks.
Hello all, below is my alert search. I don't want any alerts during the night between 23:50 and 00:25, but I am still getting them and they are triggering alerts to the support team. This is the daily restart window for the interfaces, so no alerts are needed during this time.

index=XXX sourcetype=XXX punct="--_::.,_=\"\""
| rex field=_raw "\d*-\d*-\d*\s(?<hour>\d*:\d*):\d*\S\d*\S"
| search hour!=23:50
| search hour!=00:15
| table _time SITE

Appreciate any help on this. Below is a sample event:

2024-03-20 06:32:08.046, SITE="UU3"
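A sketch of one way to suppress the whole window rather than two individual minutes (assuming _time is parsed correctly and the window really is 23:50 to 00:25):

index=XXX sourcetype=XXX punct="--_::.,_=\"\""
| eval hhmm = strftime(_time, "%H%M")
| where NOT (hhmm >= "2350" OR hhmm <= "0025")
| table _time SITE

The hour!=23:50 and hour!=00:15 filters only exclude events stamped in those exact minutes; comparing a zero-padded HHMM string covers the full range. Alternatively, the alert's cron schedule can simply be set so it does not run during the restart window.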
I am searching some logs in an application for the last 24 hours (or any time range the user has selected). Is it possible to search the same logs in another application for the following day? E.g., if the user has selected the last hour as the time range, can I see the trajectory of those logs over the course of the next day?
Hello, how can I search based on a variable? If select contains "many", then search no IN (1 to 30); otherwise search no=7.

| eval variable = if(select="many", "(1-30)", "7")
| search no IN variable    ==> This doesn't work
| search no IN (7)    ==> This works
| search no IN (1,2,3,4,5,6,7,8,9,10,11)    ==> This works, but I have to enter the numbers manually
| where variable IN (1,2,3,4,5,6,7,8,9,10,11)    ==> This does not work (although the Splunk documentation says it should work) https://docs.splunk.com/Documentation/SCS/current/SearchReference/WhereCommandOverview
| regex no= "([1-30])"    ==> This works
| regex no = variable    ==> This does not work (variable)

Thank you for your help.
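One possible approach (a sketch, assuming no is numeric and select is already extracted as a field): express the condition directly in a where clause instead of building a search string.

| eval no = tonumber(no)
| where (select="many" AND no >= 1 AND no <= 30) OR (select!="many" AND no = 7)

The search command's IN operator needs a literal list at parse time, which is why passing a field called variable does not work; where is evaluated per event, so the condition can reference fields directly.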
I'm trying to search for a specific phrase with the search below, but I only want result1, not result2. The issue here, I guess, is that parts of the phrase I'm searching for are present in both results.

Search:
index=example host=example message_name=* AND profileId="xxxx-xxxxx-xxxxx" AND "deviceClass":"example" AND "Message received: {"name":"screenView","screenName":"assetcard""

Result1:
MessageReceiver:96 - Message received: {"name":"screenView","screenName":"assetcard","previous":{"name":"screenView","screenName":"homeScreen","subscreenName":"STB.TOP.HOME"

Result2:
MessageReceiver:96 - Message received: {"name":"screenView","screenName":"homeScreen","previous":{"name":"screenView","screenName":"assetcard"
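A possible workaround (a sketch; the escaping may need adjusting for the actual raw data): keep the term search broad and then apply a regex that anchors the phrase to "Message received:", so only events where assetcard is the first screenName match.

index=example host=example message_name=* profileId="xxxx-xxxxx-xxxxx" "assetcard"
| regex _raw="Message received: \{\"name\":\"screenView\",\"screenName\":\"assetcard\""

The quoted phrase in the original search gets broken up by the embedded quotes and matched as individual terms, which is likely why both results come back; the regex enforces the exact ordering.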
Hi, we are getting the error below on the machines running the Network Toolkit app, and it is affecting data forwarding to Splunk Cloud. Please help.

0000 ERROR ExecProcessor [5441 ExecProcessorSchedulerThread] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/network_tools/bin/ping.py"
self.logger.warn("Thread limit has been reached and thus this execution will be skipped for stanza=%s, thread_count=%i", stanza, len(self.threads))

Thanks!
Hi, I set up the Splunk HTTP integration with https://splunkbase.splunk.com/app/5904. I have a problem sending a PUT request to Splunk ES according to https://docs.splunk.com/Documentation/ES/latest/API/NotableEventAPIreference#.2Fservices.2Fnotable_update. I don't know what the body and headers should look like. The connection itself works, because the server confirms that I have access, but I get: Status Code: 400, Data from server: "ValueError: One of comment, newOwner, status, urgency, disposition is required". Has anyone encountered this problem?
Hi, we need to get a list of all the knowledge objects owned by a user in our Splunk Cloud instance via APIs. I have access to the Splunk Cloud ACS API and am able to get a list of all users, but it does not include details of the knowledge objects owned by each user. Is it possible to get this data via ACS? How can I get this data? Details of our Splunk Cloud instance: Version 9.1.2308.203, Experience: Victoria.
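A sketch of one alternative (not ACS, and assuming your role is allowed to run the rest search command against this endpoint; the user name below is a placeholder):

| rest /servicesNS/-/-/directory count=0
| search eai:acl.owner="some_user"
| table title eai:type eai:acl.app eai:acl.sharing eai:acl.owner

The directory endpoint aggregates most knowledge object types (saved searches, lookups, macros, and so on); individual endpoints such as /servicesNS/-/-/saved/searches can be queried the same way if only one object type is needed.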
I'm trying to connect my SOAR cluster (primary & warm backup) to an external PostgreSQL database. The docs only seem to cover backing up one SOAR node to another, or backing up an existing SOAR cluster that already uses an external PostgreSQL server. How can I connect my Splunk SOAR nodes to an external DB? To be specific: if I have already backed up the Phantom DB and restored it onto the external server (using the Postgres docs), how can I "tell" Splunk SOAR to start using the external DB instead of the internal socket?
Hello dear Splunk experts, I would like to understand a "re-connection hiccup". One of my indexers in the indexer cluster needed a timeout, so I used

~/bin/splunk offline --decommission_node_force_timeout 3600

to take it (somewhat gracefully) offline. The cluster master showed "Restarting" (~/bin/splunk show cluster-status --verbose), so I started working on the machine and later rebooted it. Splunk started on the indexer and then... nothing: the cluster master still showed "Restarting" after more than 10 minutes. So I decided to log in to the indexer's web UI and navigate to the settings / peer page, and about 20 seconds later the cluster master's cluster-status showed the peer as "status UP", without me changing anything. Some days later I did the same with another indexer and it was the same story: I needed to log in to the web UI in order to have the peer shown as UP in the cluster. Is there anything in the docs that I have missed? Is this normal? (How can I trust the self-healing capabilities of the cluster if one needs to manually log in to the peer after a downtime?) Thanks a lot + kind regards, Triv
I have a single index which logs incoming-request and completed-request details. There is a common identifier, contextId. I want to fetch key parameters from each entry and then merge them into a single table for dashboarding.

Incoming request details (which will not have the keyword numDCs):

index="log-3258-prod-c" NOT numDCs
| table _time, contextId, user_name, Flow

Completed request details (which will have the keyword numDCs):

index="log-3258-prod-c" numDCs
| fields contextId, contextIdUser, numDCs, productCount, clientIP, laas_hostname, flowId

I need a table with all the columns from both, with contextId as the merge column. There is a chance that an incoming request has not completed yet (it might still be executing), in which case its values for the completed-request columns should be null.
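A sketch of one way to do this in a single search (field names are taken from the question, untested): tag each event as incoming or completed, then collapse both onto one row per contextId. Requests that have not completed simply leave the completed-side columns empty.

index="log-3258-prod-c"
| eval stage = if(searchmatch("numDCs"), "completed", "incoming")
| stats earliest(_time) as _time values(user_name) as user_name values(Flow) as Flow
        values(contextIdUser) as contextIdUser values(numDCs) as numDCs values(productCount) as productCount
        values(clientIP) as clientIP values(laas_hostname) as laas_hostname values(flowId) as flowId
        by contextId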
I have a report with a table where I am showing uptime availability of various products. Currently the table returns only results that fall below 100%. That makes sense overall, but I need all the data, so components with no downtime should show as 100%. For the life of me I cannot figure it out. Please, all-knowing Splunk gods, help me.

index=my_data data.environment.application="MY APP" data.environment.environment="test"
| eval estack="my_stack"
| fillnull value="prod" estack data.environment.stack
| where 'data.environment.stack'=estack
| streamstats window=1 current=False global=False values(data.result) AS nextResult BY data.componentId
| eval failureStart=if((nextResult="FAILURE" AND 'data.result'="SUCCESS"), "True", "False"), failureEnd=if((nextResult="SUCCESS" AND 'data.result'="FAILURE"), "True", "False")
| transaction data.componentId, data.environment.application, data.environment.stack startswith="failureStart=True" endswith="failureEnd=True" maxpause=15m
| stats sum(duration) as downtime by data.componentId
| addinfo
| eval uptime=(info_max_time - info_min_time)-downtime, avail=(uptime/(info_max_time - info_min_time))*100, downMins=round(downtime/60, 0)
| rename data.componentId AS Component, avail AS Availability
| table Component, Availability
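A sketch of one way to bring back the components with zero downtime (untested, and it assumes every component of interest appears in the raw data during the search window): keep everything up to and including the transaction command as is, then append one zero-downtime row per component before computing availability and take the max so real downtime wins.

| stats sum(duration) as downtime by data.componentId
| append
    [ search index=my_data data.environment.application="MY APP" data.environment.environment="test"
      | stats count by data.componentId
      | eval downtime=0
      | fields data.componentId downtime ]
| stats max(downtime) as downtime by data.componentId
| addinfo
| eval uptime=(info_max_time - info_min_time)-downtime, avail=(uptime/(info_max_time - info_min_time))*100
| rename data.componentId AS Component, avail AS Availability
| table Component, Availability

If the full component list lives in a lookup instead, inputlookup append=true plus fillnull value=0 downtime would achieve the same thing without the appended subsearch.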
I want to use syslog-ng to feed data through a universal forwarder into my search head. I will use TCP, but I don't know where it went wrong; I cannot see my data on the search head. This is my syslog-ng splunk.conf:

template syslog { template("${DATE} ${HOST} ${MESSAGE}\n"); };

rewrite rewrite_stripping_priority { subst("^\<\\d+>", "", value(MESSAGE)); };

source src_udp_514 { udp(ip("0.0.0.0") so_rcvbuf(16777216) keep_timestamp(yes) flags(no-parse)); };

destination dest_tcp_10001 { tcp("127.0.0.1" port(10001) template("syslog")); };
filter f_linux_server { netmask(172.18.0.8/32) };

destination dest_tcp_10002 { tcp("127.0.0.1" port(10002) template("syslog")); };
filter f_linux_server2 { netmask(172.18.0.9/32) };

log {
    source(src_udp_514);
    rewrite(rewrite_stripping_priority);
    if (filter(f_linux_server)) {
        destination(dest_tcp_10001);
    } elif (filter(f_linux_server2)) {
        destination(dest_tcp_10002);
    };
};

I have also already set up TCP inputs 10001 and 10002 on my universal forwarder.
Dear Splunk Community, I am currently creating a Splunk dashboard and would like to save user-defined filters in the dashboard, even after Splunk has been reopened.

Background: I have a table on one layer whose data comes from SAP to Splunk via a push extractor. Not all of the data displayed in the table is relevant, so I want to hide certain rows using a dropdown field/checkbox; those rows should then no longer be included in the visualizations on the other layers.

How can I ensure that these hidden rows are no longer included in the visualizations for all users of the dashboard? And how can I ensure that the filter settings are preserved even after the dashboard has been closed for more than 24 hours?

I really hope that someone can help me with this. Kind regards, Julian
I have a query...

index=blah "BAD_REQUEST"
| rex "(?i) requestId (?P<requestId>[^:]+)"
| table requestId
| dedup requestId

...that returns 7 records/fields:

92d246dd-7aac-41f7-a398-27586062e4fa
ba79c6f5-5452-4211-9b89-59d577adbc50
711b9bb4-b9f1-4a2b-ba56-f2b3a9cdf87c
e227202a-0b0a-4cdf-9b11-3080b0ce280f
6099d5a3-61fc-418b-87b4-ddc57c482dd6
348fb576-0c36-4de9-a55a-97157b00a304
c34b7b96-094d-45bb-b03d-f9c98a4efd5f

...that I then want to use as input for another search on the same index. I looked at the manual and can see that subsearches are allowed [About subsearches - Splunk Documentation], but when I add my subsearch as input...

index=blah [search index=blah "BAD_REQUEST" | rex "(?i) requestId (?P<requestId>[^:]+)" | table requestId | dedup requestId]

...I would have expected at least 7 records to be returned, BUT I do not see any output. There are no syntax issues, so can someone explain to me what I'm not seeing/doing? Any help appreciated.
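A likely cause and a sketch of a fix (assuming the outer events do not have an indexed or automatically extracted field literally named requestId): a subsearch returns its results as field=value conditions, so the outer search becomes requestId="92d246dd-..." OR ..., which matches nothing when requestId only exists because of the rex inside the subsearch. Renaming the field to search makes the subsearch return bare terms instead:

index=blah
    [ search index=blah "BAD_REQUEST"
      | rex "(?i) requestId (?P<requestId>[^:]+)"
      | dedup requestId
      | fields requestId
      | rename requestId as search ]

With the rename, the outer search simply looks for the raw UUID strings anywhere in the events.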
I'm using Splunk Cloud, so I cannot upload CSS and JS files as a self-service thing. When I use

<dashboard version="1.1" script="table_icons_inline.js" stylesheet="table_decorations.css">

from within the Dashboard Examples app (simple_xml_examples), it works fine. My question is that I want to use those JS and CSS files from the Dashboard Examples app in other apps too. I believe there should be some way to reference those files via relative paths, like:

<dashboard version="1.1" script="apps/simple_xml_examples/appserver/static/table_icons_inline.js" stylesheet="apps/simple_xml_examples/appserver/static/table_decorations.css">

Any ideas??
The Vector splunk_hec_logs sink [1] supports the compression algorithms gzip, snappy, zlib, and zstd. It seems the Splunk HEC server only supports gzip (I am using docker.io/splunk/splunk 9.2). Does Splunk HEC support snappy, zlib, or zstd? Is it possible to enable these algorithms in addition to gzip?

[1] https://vector.dev/docs/reference/configuration/sinks/splunk_hec_logs/#compression
Here is the index stanza:

[new_dex]
homePath = volume:hotwarm/new_dex/db
coldPath = volume:cold/new_dex/colddb
thawedPath = $SPLUNK_DB/new_dex/thaweddb
maxTotalDataSizeMB = 2355200
homePath.maxDataSizeMB = 2944000
maxWarmDBCount = 4294967295 // I know this is wrong, but need help setting it
frozenTimePeriodInSecs = 15552000
maxDataSize = auto_high_volume
repFactor = auto

Also, should any other key = value pairs be added? There are 18 indexers deployed, each with 16 TB of storage. frozenTimePeriodInSecs has been reached, but data is not being moved/deleted. What configuration/details am I missing here? I need the data gone!
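One thing worth checking (a sketch, run from a search head that can reach the indexers): a bucket is only frozen once its newest event is older than frozenTimePeriodInSecs, so a bucket that spans a long time range can keep old data around. dbinspect shows the time span and state of each bucket in the index.

| dbinspect index=new_dex
| eval newest_event_age_days = round((now() - endEpoch) / 86400, 1)
| table splunk_server bucketId state startEpoch endEpoch newest_event_age_days sizeOnDiskMB
| sort - newest_event_age_days

If buckets whose newest event is older than 180 days (15552000 seconds) still show up in warm or cold state, the freeze policy is the place to dig; otherwise the old data is simply sitting in buckets that also contain newer events and cannot be frozen yet.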
I have two timestamps in milliseconds: start=1710525600000, end=1710532800000. How can I search for logs between those timestamps? Let's say I want to run this query:

index=my_app
| search env=production
| search service=my-service

How do I specify the time range in milliseconds for this query?
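A sketch of two options (assuming the values are Unix epoch milliseconds): earliest and latest take epoch seconds, so divide by 1000, either directly in the search string or in a where clause on _time.

index=my_app env=production service=my-service earliest=1710525600 latest=1710532800

or, keeping the range inside the pipeline:

index=my_app env=production service=my-service
| where _time >= (1710525600000/1000) AND _time < (1710532800000/1000)

_time is stored in seconds (with sub-second precision), which is why the millisecond values need to be divided by 1000.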