Hey Community, I need guidance with the scenario below. A user will provide an IP address as input. I want the last two octets of the input IP to be compared with the last two octets of the Source IP field, and the matching results returned. For example, with input_ip="1.2.3.4" and src_ip="4.5.3.4", the event should be returned because the last two octets match. I have tried replacing the first two octets with * using regex and strcat, but it doesn't work for me.
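One possible approach (a sketch; `input_ip` would come from a dashboard token or be hard-coded, and `src_ip` is assumed to already be an extracted field): split both addresses on the dot, keep the last two octets, and compare.

```spl
| eval input_ip="1.2.3.4"
| eval input_suffix=mvjoin(mvindex(split(input_ip, "."), 2, 3), ".")
| eval src_suffix=mvjoin(mvindex(split(src_ip, "."), 2, 3), ".")
| where input_suffix=src_suffix
```

`split` turns the IP into a multivalue field of octets, `mvindex(..., 2, 3)` keeps the third and fourth, and `mvjoin` glues them back together for the comparison.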
Hi All, please help me with Splunk alerts for the scenario below. Thanks, Vijay Sri S
I have managed to pull together the following:

| mstats max(_value) prestats=true WHERE metric_name="df.used" span=1mon AND host IN (server1.fqdn,server2.fqdn,server3.fqdn,server4.fqdn)
| timechart max(_value) as "max" span=1mon by host

I am struggling to work out how to add a column that shows the percentage difference from the previous max value. Ideally, I want to produce something like the table below. Any pointers greatly appreciated!

Month       Server1   % change   Server2   % change   Server3   % change   Server4   % change
2021-12     66.0454              62.2212              58.0469              60.6775
2022-01     68.8615   4.26%      63.6594   2.31%      58.0931   0.08%     60.6775   0.00%
2022-02     68.0096   -1.24%     57.1727   -10.19%    58.3543   0.45%     60.6775   0.00%
2022-03     69.0297   1.50%      57.5982   0.74%      58.3765   0.04%     60.6775   0.00%
2022-04     74.4503   7.85%      56.7901   -1.40%     58.3883   0.02%     60.6775   0.00%
2022-05     79.0023   6.11%      54.415    -4.18%     58.2995   -0.15%    60.6775   0.00%
2022-06     84.5459   7.02%      54.5954   0.33%      58.3365   0.06%     60.6775   0.00%
Avg growth            4.25%                -2.06%               0.08%               0.00%
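One way to get the month-over-month change (a sketch, untested against this data; it drops `prestats` so that `streamstats` can see the finished monthly values):

```spl
| mstats max(_value) as max WHERE metric_name="df.used" AND host IN (server1.fqdn,server2.fqdn,server3.fqdn,server4.fqdn) span=1mon by host
| streamstats current=f window=1 last(max) as prev_max by host
| eval pct_change=round((max - prev_max) / prev_max * 100, 2)
```

`xyseries` or `chart` can then pivot the hosts back into columns, and `stats avg(pct_change) by host` gives the average-growth row.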
There is something wrong with the data output when using appendcols: Total_Actual is blank from 2022-02 onward, but there is actually data for all months. May I know the reason?

index=sourceA PRIORITY="High" OR PRIORITY="Medium" OR PRIORITY="Low" WAS_CRITICAL="yes"
| eval _time=strptime(FIRST_SOLVED_DATE,"%Y-%m-%d %H:%M:%S.%N")
| timechart span=1mon count as Total
| appendcols [search index=sourceA PRIORITY="Critical" | eval _time=strptime(FIRST_SOLVED_DATE,"%Y-%m-%d %H:%M:%S.%N") | timechart span=1mon count as Total_Actual]
| eval Rate_%=round((Total_Actual/Total)*100, 2)
| table _time, Total, Total_Actual, Rate_%
| tail 12
| sort _time

OUTPUT:

_time                          Total   Total_Actual   Rate_%
2021-07-01T00:00:00.000+0200   76      64             84.21
2021-08-01T00:00:00.000+0200   74      51             68.92
2021-09-01T00:00:00.000+0200   81      45             55.56
2021-10-01T00:00:00.000+0200   75      71             94.67
2021-11-01T00:00:00.000+0200   118     58             49.15
2021-12-01T00:00:00.000+0200   101     105            103.96
2022-01-01T00:00:00.000+0200   81      86             106.17
2022-02-01T00:00:00.000+0200   95
2022-03-01T00:00:00.000+0200   85
2022-04-01T00:00:00.000+0200   96
2022-05-01T00:00:00.000+0200   106
2022-06-01T00:00:00.000+0200   141
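`appendcols` pairs rows purely by position, and its subsearch runs under subsearch time and result limits, which can leave later months empty. A single-search alternative that avoids the subsearch entirely (a sketch; it assumes both priority groups live in the same index, as written above):

```spl
index=sourceA ((PRIORITY="High" OR PRIORITY="Medium" OR PRIORITY="Low") WAS_CRITICAL="yes") OR PRIORITY="Critical"
| eval _time=strptime(FIRST_SOLVED_DATE,"%Y-%m-%d %H:%M:%S.%N")
| timechart span=1mon count(eval(PRIORITY!="Critical")) as Total, count(eval(PRIORITY="Critical")) as Total_Actual
| eval Rate_pct=round((Total_Actual/Total)*100, 2)
```

Because both counts come from one `timechart`, the months can never fall out of alignment.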
I found several questions and solutions about increasing the size (mostly the width) of an input box. The way users give their input now is not very user friendly, because it is one line of text that partly disappears depending on the width of the textbox. Is there a way to change the height of the input box so that the complete text a user types stays visible over several lines?
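As far as I know, Simple XML has no multi-line text input, but a CSS override in a hidden HTML panel can stretch the single-line box. A sketch only: the selectors below are assumptions and vary by Splunk version, so verify them with the browser's developer tools.

```xml
<row depends="$always_hidden$">
  <panel>
    <html>
      <style>
        /* stretch text inputs; selectors are assumptions, check with dev tools */
        .input-text input,
        div[data-test="text-input"] input {
          height: 6em !important;
          width: 400px !important;
        }
      </style>
    </html>
  </panel>
</row>
```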
I am using time-range-type=BETWEEN_TIMES in the REST API and pulling data for a 15-day period (14 June 2022 to 29 June 2022). The response always comes back with frequency SIXTY_MIN. Is this configurable, or is it decided internally by AppDynamics? Sample response:

<metric-data>
        <metricId>160179877</metricId>
        <metricName>BTM|Application Summary|Average Response Time (ms)</metricName>
        <metricPath>Overall Application Performance|Average Response Time (ms)</metricPath>
        <frequency>SIXTY_MIN</frequency>
        <metricValues>
        ......

I would like to change that frequency to another value, such as 5 min or 10 min, when querying data that is 15 days old, if that can be configured in the API. Could you please share the API here? Thanks & Regards, Rahul
Hello, I am running the following search via the API:

search index=juniper sourcetype=juniper:junos:firewall "3389"
| rex "(?<src_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})/(?<src_port>\d{1,5})-\>(?<dest_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})/(?<dest_port>\d{1,5})"
| eval src_ip_class=if((cidrmatch("192.168.0.0/24",src_ip) OR cidrmatch("172.16.0.0/12",src_ip) OR cidrmatch("10.0.0.0/8",src_ip)),"private","public")
| where dest_port=3389 and src_ip_class="private"
| head 5

I am trying to run it in two ways:
- with earliest=-90d in the first clause of the query (right after the "3389")
- with earliest_time=2022-01-31T00:00:00.000Z and latest_time=2022-06-28T23:59:59.999Z passed via the API

The first method returns many results while the second returns none. When querying the search status, I see a huge difference in the very first stage of the query:

"command.search": { "duration_secs": 21.298, "invocations": 111, "input_count": 0, "output_count": 266474 }

vs

"command.search": { "duration_secs": 2.329, "invocations": 52, "input_count": 0, "output_count": 98835 }

I have confirmed that the earliestTime and latestTime shown in the search status are correct. I have also checked for time skew between _time and index time and found none. Changing earliest_time to index_earliest also did not help. What can account for this difference?
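For comparison, a minimal sketch of passing the time window to the search jobs endpoint (host and credentials are placeholders). One thing worth testing: some Splunk versions are picky about the trailing `Z` in ISO timestamps, so spelling the offset out explicitly, or using epoch seconds, sidesteps any parsing ambiguity.

```shell
curl -k -u admin:changeme https://splunk.example.com:8089/services/search/jobs \
  -d search='search index=juniper sourcetype=juniper:junos:firewall "3389" | head 5' \
  -d earliest_time='2022-01-31T00:00:00.000+00:00' \
  -d latest_time='2022-06-28T23:59:59.999+00:00'
```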
Hi, I installed Splunk on a Linux server (Ubuntu on AWS) and I am unable to open Splunk Web in the browser. Please help.
Hi, I have a mixed-version Splunk deployment involving one indexer on 8.2.1 and another on 7.3.1. There are also 3 heavy forwarders chained to one another to reach the indexers. Here are the versions:

Indexer 01 - 8.2.1
Indexer 02 - 7.3.1.1
2 HFs - 7.3.1.1
1 HF - 8.2.1
1 UF - 7.3.1

This is how data from the UF is forwarded to the indexers:

UF -> 7.3.1.1 HF -> 7.3.1.1 HF -> Indexer 02
UF -> 7.3.1.1 HF -> 7.3.1.1 HF -> 8.2.1 HF -> Indexer 01

Both indexers receive _internal logs from all UFs and HFs, but only Indexer 02 (7.3.1.1) receives main and other custom indexes. This is the concern. According to this, I should be able to receive events from the 7.3.1 UF on the 8.2.1 indexer; it mentions that 7.3.1 and 8.2.1 are compatible with limited support. What does "limited support" mean? What I have tested so far: a fully 7.3.1 environment and a fully 8.2.1 environment can each receive custom logs from the UF, but the mixed one hasn't worked yet. Is there anything I might have missed? Thank you, much appreciated!
Command: rex mode=sed "s/\"name":\s\"[^\"]+\"/"name":"###############"/g"

The regex seems to work fine in regex101, yet I keep getting this error:

Error in 'SearchParser': Missing a search command before '^'. Error at position '69' of search query rex mode=sed "s/\"c...{snipped} {errorcontext = n_id"\s\"[^\"]+\"/"co}'.

I'm trying to mask a JSON key:value pair, like this: "name": "john doe" ----> "name": "######"
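The parse error comes from quoting rather than the regex itself: inside an SPL double-quoted string, every literal double quote must be escaped, and several in the command above are not. One way the command could be written (a sketch):

```spl
| rex mode=sed field=_raw "s/\"name\":\s*\"[^\"]+\"/\"name\": \"######\"/g"
```

Every `"` that belongs to the sed expression is escaped as `\"`, so the SPL parser only sees the outer pair as string delimiters.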
6/29/22 4:58:14.526 PM
2022-06-29 17:58:14.526 [Task1] INFO Task1 - Published Task1 received  id 101

6/29/22 4:59:14.526 PM
2022-06-29 17:58:14.526 [Task1] INFO Task1 - Published Task1 done  id 101

I'm trying to fetch the time of both events (when the task is received and when it is done) and calculate the difference between them, in the form of a table. I tried:

index=source "Published Task 1" | rex "id" (?<ID>\d+) | table ID start_time End_time difference _time
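One way to pair the two events per id and compute the gap (a sketch; the `rex` pattern and `searchmatch` strings assume the event text shown above):

```spl
index=source "Published Task1"
| rex "id\s+(?<ID>\d+)"
| eval start_time=if(searchmatch("received"), _time, null())
| eval end_time=if(searchmatch("done"), _time, null())
| stats min(start_time) as start_time, max(end_time) as end_time by ID
| eval difference=end_time - start_time
| fieldformat start_time=strftime(start_time, "%F %T.%3N")
| fieldformat end_time=strftime(end_time, "%F %T.%3N")
| table ID start_time end_time difference
```

`stats ... by ID` collapses the received/done pair into one row, and `difference` comes out in seconds.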
Hi All! After upgrading to 8.1.10, data is no longer coming in from a REST source. How can I check what the input configs were prior to the change? What else could cause this issue?
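To see what the inputs currently resolve to, and which file each setting comes from, btool can help; comparing that output against any pre-upgrade backup of `$SPLUNK_HOME/etc` would show what changed. The path is the default and may differ on your install:

```shell
$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i rest
```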
Hello, I have a user who wants to send logs to Splunk Cloud via HEC through a HF. I created a token on the HF and shared the token, index, and HF endpoint. When the user sends a test event with curl it succeeds and I can see the event, but when the user sends via Logstash we see a Java cert error. My question is whether the user can output to regular HTTP and not use SSL. Error message: message=>"xxxx path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target". Looking for suggestions. Thanks.
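If running without TLS is acceptable for this hop, HEC on the HF can be switched to plain HTTP in inputs.conf (a sketch; the app path below is the usual default but worth verifying), after which the endpoint becomes http:// instead of https://. Alternatively, importing the HF's certificate chain into Logstash's Java truststore keeps TLS on.

```ini
# $SPLUNK_HOME/etc/apps/splunk_httpinput/local/inputs.conf
[http]
disabled = 0
enableSSL = 0
```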
Greetings Community Experts. I have a group of devices that should each report state to a portal every 10 seconds. If a device fails to report for 6 periods (one minute), I categorize the device as disconnected. The time period is a workday of 6:30 AM - 6:30 PM (12 hours / 720 minutes). I am trying to use the search results to generate a percentage of connected devices, but the calculation fails in the last step. Requesting your assistance to develop a working search. Here is the search I am using; thanks in advance!

index=test earliest=-2d@d+6h+30m latest=-1d@d-5h-30m
| bucket span=1m _time
| stats count by _time, SerialNumber
| eval state=if(count>=1, "Con", "Dcon")
| stats count by SerialNumber, state
| eval status=case(count=720, "Connected", count<720, "Disconnected")
| stats count by status
| eval Percent=round((Connected-Disconnected)/Connected*100, 2)."%"
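The final eval fails because after `| stats count by status` there are no fields named Connected or Disconnected, only rows. Folding the last two steps into a single stats call avoids that (a sketch; it computes connected devices as a share of all observed devices, which I assume is the intent):

```spl
index=test earliest=-2d@d+6h+30m latest=-1d@d-5h-30m
| bucket span=1m _time
| stats count by _time, SerialNumber
| stats count as minutes_reporting by SerialNumber
| eval status=if(minutes_reporting>=720, "Connected", "Disconnected")
| stats count(eval(status="Connected")) as Connected, count as Total
| eval Percent=round(Connected/Total*100, 2)."%"
```

One caveat: a device that never reports in the window produces no events at all, so it won't appear in Total unless the search is joined against an inventory lookup of expected serial numbers.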
Hello, I have a single on-prem deployment server running Splunk Enterprise version 8.0.5, and I am planning to upgrade it to v9. Can someone help me with the upgrade steps? Thanks
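The usual *nix upgrade flow is: back up, stop, extract the new release over the existing installation, restart (a sketch; paths and filenames are assumptions, and the official upgrade notes should be read first). Note that direct upgrades to 9.x are generally supported only from 8.1.x/8.2.x, so coming from 8.0.5 an intermediate upgrade may be required; confirm against the supported upgrade path matrix.

```shell
# back up the configuration first (paths are examples)
/opt/splunk/bin/splunk stop
tar -czf /backup/splunk-etc-$(date +%F).tar.gz /opt/splunk/etc

# extract the new release over the existing installation
tar -xzf splunk-9.x.x-linux-x86_64.tgz -C /opt

/opt/splunk/bin/splunk start --accept-license --answer-yes
```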
Hello, is it possible to delete user-created sourcetypes on Splunk Cloud? I checked under all configurations and the sourcetypes options but didn't find anything. Does anyone have an idea? I guess we need to open a case with Splunk Support for deletion? Thanks
I have two columns per event I am trying to use; we'll call these col1 and UnknownRandomColumnName (urcn for short). The key of urcn changes from event to event, but the value of col1 will always be the key of urcn. How can I use the value of col1 as a key for the data I'd like to output from urcn in a search? Example data for my events may look like:

| col1  | urcn1 | urcn2 |
|-------|-------|-------|
| urcn1 | Value |       |
| urcn2 |       | Value |
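`foreach` can template over the unknown column names, comparing each field's name against the value of col1 (a sketch built on the example fields above):

```spl
| foreach urcn* [ eval wanted=if("<<FIELD>>"=col1, '<<FIELD>>', wanted) ]
| table col1 wanted
```

Inside the foreach subsearch, `"<<FIELD>>"` is the field's name as a string while `'<<FIELD>>'` is its value, so `wanted` picks up the value of whichever urcn column matches col1.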
How can I extract a list of users?
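If this means Splunk user accounts, the REST endpoint can be queried straight from the search bar (a sketch; it requires permission to call the endpoint):

```spl
| rest /services/authentication/users splunk_server=local
| table title realname email roles
```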
I am trying to add data into Splunk in JSON format. All the events have the same format, say:

[
     field1 : value1
     field2 : value2
]

Is it possible for me to update value1 to some value3, given field1? I am looking to first achieve this from the web UI and, if that is possible, I am then looking for REST APIs to achieve the same.