All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

We have some Power Distribution Units and UPS devices that the DC team is planning to ingest into Splunk for monitoring. Is there any way to monitor data from those devices in Splunk?
I have a modular input that accepts AWS credentials when configuring an input for the add-on. The secret key field is a password-type field, so after inputs are saved to inputs.conf, the secret key is encrypted and stored in passwords.conf. This code to get the decrypted value while processing events used to work fine for the add-on: helper.get_arg('access_key'). However, after upgrading the add-on with Add-on Builder v4.x.x, the same code returns ***** instead of the actual value. What might be the issue? Does something need to be done before upgrading, or is there another way to get the decrypted value from passwords.conf?
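One hedged workaround sketch (assuming the add-on still writes the secret to the standard storage/passwords endpoint, and that the searching user holds the list_storage_passwords capability; the realm filter below is illustrative) is to list the clear value over REST:

```
| rest /servicesNS/-/-/storage/passwords splunk_server=local
| search realm="*your_addon*"
| table title realm clear_password
```

If this shows the clear value, the credential storage itself is intact and the issue is likely in how the upgraded helper exposes the argument.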
Hello, I have two dashboards. The first displays references with errors; the second searches for a specific reference and displays all the work steps (the second dashboard has a time selector and a reference selector, and the reference selector feeds a variable used by all elements of the dashboard). I would like to add a link in my first dashboard that jumps to the second with the reference filled in automatically. Is this possible? Could you help me please?
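A hedged sketch of one way to do this in Simple XML (the app, dashboard, field, and token names here are illustrative): add a drilldown to the table panel on the first dashboard that opens the second dashboard and pre-fills its tokens from the clicked row:

```
<drilldown>
  <link target="_blank">/app/search/second_dashboard?form.reference=$row.reference$&amp;form.time.earliest=-24h&amp;form.time.latest=now</link>
</drilldown>
```

For the form.* parameters to land, the reference input on the second dashboard must use token="reference" and the time picker token="time".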
Hi folks, we use the latest cluster agent with auto-instrumentation. For .NET Core we have docker.io/appdynamics/dotnet-core-agent:latest with APPDYNAMICS_AGENT_REUSE_NODE_NAME: "true" and APPDYNAMICS_AGENT_REUSE_NODE_NAME_PREFIX: node, and it works. But for the Node.js agent, docker.io/appdynamics/nodejs-agent:22.5.0-14-stretch-slim, it doesn't work: each agent restart creates a new node with the name incremented by 1. How can we fix this?
Hi All, I am trying to create a table from the log below:

ServerA                ServerB                ServerC
ADFILES41-6.2-4        not_available          ADFILES41-6.2-4.2
ADM41-5.10.1-4         ADM41-5.10.1-4         ADM41-5.10.1-4
ADM41HF-5.10.1HF004-4  ADM41HF-5.10.1HF004-4  ADM41HF-5.10.1HF004-4
ADM42-5.11-4           ADM42-5.11-4           ADM42-5.11-4
ADM42HF-5.11HF03-4     ADM42HF-5.11HF03-4     not_available
TRA42-5.11-4           TRA42-5.11-4           not_available
not_available          ADFILES42-6.2-4        not_available
not_available          not_available          TRA42-5.13-4

The first line gives the server names; each following line shows whether an application is available on each server. For example, the second line shows that ADFILES41-6.2-4 is available on servers A and C but not on B, and the last line shows that TRA42-5.13-4 is available on C but not on A and B. The requirement is to build a table like the one below so we can see whether any server is missing an application:

Server       ServerA                ServerB                ServerC
Application  ADFILES41-6.2-4        not_available          ADFILES41-6.2-4
Application  ADM41-5.10.1-4         ADM41-5.10.1-4         ADM41-5.10.1-4
Application  ADM41HF-5.10.1HF004-4  ADM41HF-5.10.1HF004-4  ADM41HF-5.10.1HF004-4
Application  ADM42-5.11-4           ADM42-5.11-4           ADM42-5.11-4
Application  ADM42HF-5.11HF03-4     ADM42HF-5.11HF03-4     not_available
Application  TRA42-5.11-4           TRA42-5.11-4           not_available
Application  not_available          ADFILES42-6.2-4        not_available
Application  not_available          not_available          TRA42-5.13-4

Please help me create a query to produce the table in this manner. Any help would be highly appreciated. Thank you all!
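A hedged sketch of one approach (assuming the whole log arrives as a single multi-line event and the columns are whitespace-separated; the index and sourcetype names are placeholders): split _raw into lines, drop the header line, and extract one column per server:

```
index=your_index sourcetype=your_sourcetype
| eval line=split(_raw, urldecode("%0A"))
| eval line=mvindex(line, 1, -1)
| mvexpand line
| rex field=line "^(?<ServerA>\S+)\s+(?<ServerB>\S+)\s+(?<ServerC>\S+)$"
| eval Server="Application"
| table Server ServerA ServerB ServerC
```

The mvindex(line, 1, -1) step discards the first line (the server names); if the header should drive the column names dynamically, a transpose-based approach would be needed instead.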
(Search head cluster / indexer cluster environment.) I have written a custom search command using the template Splunk provides for streaming commands. In an attempt to force the search to run on the search heads and not on the indexers, I added @Configuration(local=True) to the code:

from splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option, validators

@Configuration(local=True)
class StreamingCSC(StreamingCommand):

I took that change from https://dev.splunk.com/enterprise/docs/devtools/customsearchcommands/pythonclassescustom/, but the search still dies. If I modify my search to put | sort before | mycustomcommand, the search is forced to run locally and it works fine. What am I doing wrong in trying to keep this search off the indexers and run it only on the search head cluster?
Hi, I would like to return the rex-extracted field from a subsearch so I can print it out. How do I do that?

index=... "some text" | sort - _time [search message | rex "\[(?<number>\d{3,5})" | rex "(?<field>\w{2,4}@\d{1,4})" | return field] | dedup number | table _time number field

In the result table the field column is always empty. Thanks for any help!
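One hedged sketch of a fix (assuming the outer events contain the same text the rex expressions match): return field injects field="value" into the outer search as a filter term only; it never adds a column to the results, which is why the column stays empty. Run the extractions in the outer search instead:

```
index=... "some text"
| rex "\[(?<number>\d{3,5})"
| rex "(?<field>\w{2,4}@\d{1,4})"
| sort - _time
| dedup number
| table _time number field
```

The subsearch can still be kept purely as a filter if it narrows the outer event set.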
Hi, I'm using Splunk Web to check some searches/alerts.

1. | rest /servicesNS/-/-/saved/searches/ splunk_server=local | table title <-- displays a list of saved searches

Then I pick one from the list and run:

2. | rest /servicesNS/-/-/saved/searches/alert_without_white_spaces splunk_server=local <-- works

But when querying an alert whose name contains spaces, I get errors:

3. | rest /servicesNS/-/-/saved/searches/alert with white spaces splunk_server=local <-- error message: Error in 'rest' command: Invalid argument: '-'
3a. | rest /servicesNS/-/-/saved/searches/'alert with white spaces' splunk_server=local <-- error message: Error in 'rest' command: Invalid argument: '-'
3b. | rest /servicesNS/-/-/saved/searches/"alert with white spaces" splunk_server=local <-- error message: Unexpected status for to fetch REST endpoint uri=https://127.0.0.1:8089/servicesNS/-/-/saved/searches/alert with white spaces?count=0 from server=https://127.0.0.1:8089 - Not Found
3d. | rest /servicesNS/-/-/saved/searches/alert\ with\ white\ spaces splunk_server=local <-- error message: Error in 'rest' command: Invalid argument: '-\'
3e. | eval alert1="alert with white spaces" | rest /servicesNS/-/-/saved/searches/alert1 <-- Splunk used the variable name rather than its value: Unexpected status for to fetch REST endpoint uri=https://127.0.0.1:8089/servicesNS/-/-/saved/searches/alert1?count=0 from server=https://127.0.0.1:8089 - Not Found

Is there a way to use variables, or to query a search name containing white spaces, without getting an error?
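A hedged sketch (assuming nothing beyond percent-encoding is needed for this name): the endpoint path is a URL, so each space can be encoded as %20 instead of being quoted or escaped:

```
| rest /servicesNS/-/-/saved/searches/alert%20with%20white%20spaces splunk_server=local
```

Other reserved characters in a saved-search name would need the same percent-encoding treatment.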
Hi Team, I'm using the Splunk Cloud REST API endpoint /services/collector/event to post data to Splunk Cloud. What is the GET API to fetch the data back?
Hey Community, I need guidance with the scenario below. A user will provide an IP address as input. I want the last two octets of the input IP to be compared with the last two octets of the src_ip field, and the matched results returned. For example, with input_ip="1.2.3.4" and src_ip="4.5.3.4", the event should be returned because the last two octets match. I have tried replacing the first two octets with * using regex and strcat, but it doesn't work for me.
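A hedged sketch (assuming the user's value arrives as a dashboard token $input_ip$ and the events carry a src_ip field; both names are illustrative): extract the last two octets of each side and compare them:

```
index=your_index src_ip=*
| rex field=src_ip "^\d{1,3}\.\d{1,3}\.(?<src_tail>\d{1,3}\.\d{1,3})$"
| eval input_tail=replace("$input_ip$", "^\d{1,3}\.\d{1,3}\.", "")
| where src_tail=input_tail
```

Comparing extracted suffixes as strings avoids the wildcard-replacement approach entirely.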
Hi All, please help me with Splunk alerts for the scenario below. Thanks, Vijay Sri S
I have managed to pull together the following:

| mstats max(_value) prestats=true WHERE metric_name="df.used" span=1mon AND host IN (server1.fqdn, server2.fqdn, server3.fqdn, server4.fqdn)
| timechart max(_value) as "max" span=1mon by host

I am struggling to work out how to add a column showing the percentage difference from the previous max value. Ideally, I want to produce something like the table below. Any pointers greatly appreciated!

Totals          Server1   % change   Server2   % change   Server3   % change   Server4   % change
2021-12         66.0454              62.2212              58.0469              60.6775
2022-01         68.8615   4.26%      63.6594   2.31%      58.0931   0.08%      60.6775   0.00%
2022-02         68.0096   -1.24%     57.1727   -10.19%    58.3543   0.45%      60.6775   0.00%
2022-03         69.0297   1.50%      57.5982   0.74%      58.3765   0.04%      60.6775   0.00%
2022-04         74.4503   7.85%      56.7901   -1.40%     58.3883   0.02%      60.6775   0.00%
2022-05         79.0023   6.11%      54.415    -4.18%     58.2995   -0.15%     60.6775   0.00%
2022-06         84.5459   7.02%      54.5954   0.33%      58.3365   0.06%      60.6775   0.00%
Average growth            4.25%                -2.06%                0.08%                0.00%
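A hedged sketch of one way to compute the month-over-month change (the mstats clause mirrors the search above; streamstats carries each host's previous month forward, and the final pivot of hosts back into side-by-side columns, e.g. via xyseries, is left out for brevity):

```
| mstats max(_value) as max WHERE metric_name="df.used" AND host IN (server1.fqdn, server2.fqdn, server3.fqdn, server4.fqdn) span=1mon BY host
| sort 0 host _time
| streamstats current=f window=1 last(max) as prev by host
| eval pct_change=round((max - prev) / prev * 100, 2)
| table _time host max pct_change
```

streamstats with current=f window=1 reads the previous row within each host group, which is what the "% change" column needs.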
There is something wrong with the output when using appendcols: Total_Actual is blank from 2022-02 onward, but there is actually data for all of those months. May I know the reason?

index=sourceA PRIORITY="High" OR PRIORITY="Medium" OR PRIORITY="Low" WAS_CRITICAL="yes"
| eval _time=strptime(FIRST_SOLVED_DATE,"%Y-%m-%d %H:%M:%S.%N")
| timechart span=1mon count as Total
| appendcols [search index=sourceA PRIORITY="Critical" | eval _time=strptime(FIRST_SOLVED_DATE,"%Y-%m-%d %H:%M:%S.%N") | timechart span=1mon count as Total_Actual]
| eval Rate_%=round((Total_Actual/Total)*100, 2)
| table _time, Total, Total_Actual, Rate_%
| tail 12
| sort _time

OUTPUT

_time                          Total   Total_Actual   Rate_%
2021-07-01T00:00:00.000+0200   76      64             84.21
2021-08-01T00:00:00.000+0200   74      51             68.92
2021-09-01T00:00:00.000+0200   81      45             55.56
2021-10-01T00:00:00.000+0200   75      71             94.67
2021-11-01T00:00:00.000+0200   118     58             49.15
2021-12-01T00:00:00.000+0200   101     105            103.96
2022-01-01T00:00:00.000+0200   81      86             106.17
2022-02-01T00:00:00.000+0200   95
2022-03-01T00:00:00.000+0200   85
2022-04-01T00:00:00.000+0200   96
2022-05-01T00:00:00.000+0200   106
2022-06-01T00:00:00.000+0200   141
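A hedged sketch of an alternative that avoids appendcols entirely (appendcols aligns rows purely by position, so if the two timecharts produce different numbers of rows the columns drift apart): compute both counts in a single timechart with count(eval(...)); Rate_% is renamed Rate_pct here only because % is awkward in a field name:

```
index=sourceA ((PRIORITY="High" OR PRIORITY="Medium" OR PRIORITY="Low") AND WAS_CRITICAL="yes") OR PRIORITY="Critical"
| eval _time=strptime(FIRST_SOLVED_DATE, "%Y-%m-%d %H:%M:%S.%N")
| timechart span=1mon count(eval(PRIORITY!="Critical")) as Total, count(eval(PRIORITY="Critical")) as Total_Actual
| eval Rate_pct=round((Total_Actual / Total) * 100, 2)
| tail 12
```

Because both columns come from one timechart, every month is guaranteed to line up.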
I found several questions and solutions about increasing the size (mostly the width) of an input box. The way users give their input now is not very user-friendly, because it is one line of text that partly disappears depending on the width of the textbox. Is there a way to change the height of the input box so that users can type their input with the complete text visible over several lines?
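A hedged sketch in Simple XML (the input id and CSS values are illustrative, and whether a single-value text input can truly wrap varies by Splunk version; a hidden HTML panel is a common way to inject the CSS):

```
<form>
  <fieldset>
    <input type="text" token="user_text" id="tall_input">
      <label>Enter text</label>
    </input>
  </fieldset>
  <row depends="$alwaysHidden$">
    <panel>
      <html>
        <style>
          #tall_input input[type="text"] {
            height: 80px;
            width: 600px;
          }
        </style>
      </html>
    </panel>
  </row>
</form>
```

For genuine multi-line wrapping, a custom textarea input via a JS extension may still be needed.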
I am using time-range-type=BETWEEN_TIMES in the REST API to pull data for a 15-day period (14 June 2022 to 29 June 2022). The response always comes back with frequency SIXTY_MIN. Is this configurable, or is it decided internally by AppDynamics? Sample response:

<metric-data>
        <metricId>160179877</metricId>
        <metricName>BTM|Application Summary|Average Response Time (ms)</metricName>
        <metricPath>Overall Application Performance|Average Response Time (ms)</metricPath>
        <frequency>SIXTY_MIN</frequency>
        <metricValues>
       ......

I am looking to change that frequency to another value, such as 5 min or 10 min, when querying data that is 15 days old. If that can be configured in the API, could you please share the API here? Thanks & Regards, Rahul
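A hedged sketch (parameter names per the standard metric-data endpoint; whether finer-grained points still exist for a two-week-old window depends on the controller's metric rollup and retention settings, so this may still return hourly data): the endpoint accepts a rollup parameter, and rollup=false asks for individual data points instead of a single rolled-up value:

```
/controller/rest/applications/{application}/metric-data
    ?metric-path=Overall%20Application%20Performance%7CAverage%20Response%20Time%20(ms)
    &time-range-type=BETWEEN_TIMES
    &start-time={epoch-ms}
    &end-time={epoch-ms}
    &rollup=false
```

The granularity of what comes back is then whatever resolution the controller still retains for that window, not a value the caller can force.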
Hello, I am running the following search via the API:

search index=juniper sourcetype=juniper:junos:firewall "3389"
| rex "(?<src_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})/(?<src_port>\d{1,5})-\>(?<dest_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})/(?<dest_port>\d{1,5})"
| eval src_ip_class=if((cidrmatch("192.168.0.0/24",src_ip) OR cidrmatch("172.16.0.0/12",src_ip) OR cidrmatch("10.0.0.0/8",src_ip)),"private","public")
| where dest_port=3389 and src_ip_class="private"
| head 5

I am trying to run it in two ways:

1. With earliest=-90d in the first clause of the query (right after the "3389").
2. With earliest_time=2022-01-31T00:00:00.000Z and latest_time=2022-06-28T23:59:59.999Z passed via the API.

The first method returns many results while the second returns none. When querying the search status, I see a huge difference in the very first stage of the query:

"command.search": { "duration_secs": 21.298, "invocations": 111, "input_count": 0, "output_count": 266474 }

vs

"command.search": { "duration_secs": 2.329, "invocations": 52, "input_count": 0, "output_count": 98835 }

I have confirmed that the earliestTime and latestTime shown in the search status are correct. I have also checked for time skew between _time and _indextime and found none. Changing earliest_time to index_earliest also did not help. What can account for this difference?
Hi, I installed Splunk on a Linux server (AWS Ubuntu) and I am unable to open the web interface in a browser. Please help.
Hi, I have a mixed-version Splunk deployment with one indexer on 8.2.1 and another on 7.3.1. There are also three heavy forwarders chained to one another to reach the indexers. Here are the versions:

Indexer 01 - 8.2.1
Indexer 02 - 7.3.1.1
2 HFs - 7.3.1.1
1 HF - 8.2.1
1 UF - 7.3.1

Data from the UF is forwarded to the indexers like this:

UF -> 7.3.1.1 HF -> 7.3.1.1 HF -> Indexer 02
UF -> 7.3.1.1 HF -> 7.3.1.1 HF -> 8.2.1 HF -> Indexer 01

Both indexers receive _internal logs from all UFs and HFs, but only Indexer 02 (7.3.1.1) receives main and other custom indexes. That is the concern. According to this, I should be able to receive events from a 7.3.1 UF on an 8.2.1 indexer; it says 7.3.1 and 8.2.1 are compatible with limited support. What does "limited support" mean? What I have tested so far: a fully 7.3.1 environment and a fully 8.2.1 environment can both receive custom logs from the UF, but the mixed one hasn't worked yet. Is there anything I might have missed? Thank you, much appreciated!
Command:

rex mode=sed "s/\"name":\s\"[^\"]+\"/"name":"###############"/g"

The regex seems to work fine in Regex101. However, I keep getting this error:

Error in 'SearchParser': Missing a search command before '^'. Error at position '69' of search query rex mode=sed "s/\"c...{snipped} {errorcontext = n_id"\s\"[^\"]+\"/"co}'.

I'm trying to mask a JSON key:value pair, like this:

"name": "john doe" ----> "name": "######"
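A hedged sketch of an escaping that should parse (assuming the value lives in _raw): every literal double quote inside the sed expression must be backslash-escaped, otherwise the SPL string terminates early, which is what produces the SearchParser error:

```
| rex mode=sed field=_raw "s/\"name\":\s*\"[^\"]+\"/\"name\": \"######\"/g"
```

The original command left the quotes after name": and around the replacement unescaped, so the parser saw the string end mid-expression.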
6/29/22 4:58:14.526 PM   2022-06-29 17:58:14.526 [Task1] INFO Task1 - Published Task1 received  id 101
6/29/22 4:59:14.526 PM   2022-06-29 17:58:14.526 [Task1] INFO Task1 - Published Task1 done  id 101

I'm trying to fetch the time of both events (when the task is received and when it is done) and calculate the difference between them, presented as a table. I tried:

index=source "Published Task 1" | rex "id" (?<ID>\d+) | table ID start_time End_time difference _time
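A hedged sketch of one approach (assuming both lines share the same id and the literal phrases shown above; the search string and field names are illustrative): extract the id with a single rex, then let stats pair the two events:

```
index=source "Published Task1"
| rex "id\s+(?<ID>\d+)"
| stats earliest(_time) as start_time latest(_time) as end_time by ID
| eval difference=end_time - start_time
| fieldformat start_time=strftime(start_time, "%Y-%m-%d %H:%M:%S.%3N")
| fieldformat end_time=strftime(end_time, "%Y-%m-%d %H:%M:%S.%3N")
| table ID start_time end_time difference
```

Note that in the attempted search the capture group sits outside the quoted rex expression; the whole pattern, including the capture group, must be inside one quoted string.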