All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi all, my lead gave me a task to create a table. We have a lot of sourcetypes, and each sourcetype has two states, up and down. When a sourcetype is up we get one log message; when it is down we get a log message every 5 minutes. More than one up/down cycle per day is also possible. How do I pick out the first down message after each up?
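One possible sketch, with placeholder index/sourcetype names and an assumed way of telling up messages from down messages: sort the events into time order, track the previous state with streamstats, and keep only down events that directly follow an up.

```
index=<your_index> sourcetype=<your_sourcetype>
| sort 0 + _time
| eval state=if(searchmatch("down"), "down", "up")
| streamstats current=f last(state) as prev_state by sourcetype
| where state="down" AND prev_state="up"
```

Here `searchmatch("down")` is only a stand-in; replace it with whatever actually distinguishes the up and down messages in your data.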
Hi Team, we are not able to see any custom add-on created with Add-on Builder on our Splunk HF. What could the issue be, and how can we resolve it?
Good morning! I'm quite new to Splunk, so I'm having difficulties figuring out how to do this search properly. Here's a small snippet of events:

mc1_date    mc1_time  mc1_system  mc1_catalog           mc1_adds  mc1_updates  mc1_gets  mc1_getupd  mc1_deletes
15.12.2022  08:05:05  SYSS1       CATALOG.MASTER.SYSS1  0         0            5081      0           0
14.12.2022  08:05:16  SYSS1       CATALOG.MASTER.SYSS1  0         0            5012      0           0
13.12.2022  10:05:12  SYSS1       CATALOG.MASTER.SYSS1  0         0            6719      0           0
12.12.2022  08:05:12  SYSS1       CATALOG.MASTER.SYSS1  0         0            5051      0           0
11.12.2022  08:05:03  SYSS1       CATALOG.MASTER.SYSS1  0         0            5008      0           0
10.12.2022  08:05:08  SYSS1       CATALOG.MASTER.SYSS1  0         0            5012      0           0
09.12.2022  14:05:16  SYSS1       CATALOG.MASTER.SYSS1  0         0            11387     0           0

The table above contains the max daily mc1_gets values for CATALOG.MASTER.SYSS1 on SYSS1 from the last 7 days. The whole sourcetype contains hourly data with multiple systems and multiple catalogs per system. What I need is a way to get, per catalog and per system, the standard deviation of the daily max values of mc1_gets over a span of 7 days (or more). The output for the table above should look something like this in the end:

mc1_system  mc1_catalog           mc1_gets
SYSS1       CATALOG.MASTER.SYSS1  2380.05

Any help would be much appreciated! With best regards, Duncan Hagen
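One possible sketch (the index and sourcetype names are placeholders, and it assumes _time is already parsed from the event timestamps): first reduce to one maximum per day per system and catalog, then take the standard deviation of those daily maxima.

```
index=<your_index> sourcetype=<your_sourcetype> earliest=-7d
| bin _time span=1d
| stats max(mc1_gets) as daily_max by _time mc1_system mc1_catalog
| stats stdev(daily_max) as mc1_gets by mc1_system mc1_catalog
| eval mc1_gets=round(mc1_gets, 2)
```

Widening the time span (e.g. earliest=-30d) extends the same calculation to more days without other changes.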
Hi All, can anyone help me with a query for short-lived accounts, i.e. where a user creates and then deletes an account in Active Directory within 10 minutes? I don't need the raw logs of the user creation and deletion themselves. Thanks in advance.
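A hedged sketch using the Windows Security event codes 4720 (user account created) and 4726 (user account deleted). The index name and the exact field holding the target account name vary by deployment, so treat `index=wineventlog` and `user` here as assumptions to adjust to your data:

```
index=wineventlog EventCode IN (4720, 4726)
| transaction user startswith=eval(EventCode==4720) endswith=eval(EventCode==4726) maxspan=10m
| table _time, user, duration
```

transaction with maxspan=10m keeps only create/delete pairs that occur within 10 minutes, and the output table shows the account rather than the raw events.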
I am performing the chart command over the kind of table below.

Command: | chart values(course) as course over ID by status

Received output: (screenshot not included)
Expected output: (screenshot not included)

Kindly help me resolve this. I have also tried | mvexpand status, but it picks only the first value and gives the wrong output.
Hello there! I am working on a test environment where I only have one Splunk instance. I have succeeded in securing Splunk Web with SSL, but I have the following problem: (screenshot not included)

Here are my config files:

web.conf
[settings]
enableSplunkWebSSL = true
privKeyPath = <path to key>
serverCert = <path to certificate>

server.conf
[sslConfig]
sslPassword = password
sslVerifyServerCert = True
sslVerifyServerName = True
serverCert = <path to certificate>
cliVerifyServerName = true
sslRootCAPath = <path to CA certificate>

[kvstore]
serverCert = <path to certificate>
sslPassword = password
sslVerifyServerCert = True
sslVerifyServerName = True

[pythonSslClientConfig]
sslVerifyServerCert = true
sslVerifyServerName = true

splunk-launch.conf
PYTHONHTTPSVERIFY = 1
SPLUNK_FIPS=1

I know that the configuration for securing the environment with TLS has changed since version 9.0 of Splunk Enterprise. My CLI doesn't display any warning or error. I have followed everything suggested in these links:

Security updates - Splunk Documentation
Configure TLS certificate host name validation - Splunk Documentation
Configure Splunk Web to use TLS certificates - Splunk Documentation

Any help would be appreciated! Regards
Hi, I am trying to upload data to Splunk with the help of a Python script. I am getting a 401 (Unauthorized) error when running the code, even though I provided the valid user credentials that I use for logging into Splunk Enterprise. Can you help me figure out the reason for this error? Here is a copy of the error that occurred:

Traceback (most recent call last):
  File "E:\", line 912, in login
    response = self.http.post(
  File "E:\", line 1273, in post
    return self.request(url, message)
  File "E:\", line 1302, in request
    raise HTTPError(response)
splunklib.binding.HTTPError: HTTP 401 Unauthorized -- Login failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\", line 263, in <module>
    server = splunklib.client.connect(host=ARGS.splunk, username='', password='')
  File "E:\", line 345, in connect
    s.login()
  File "E:\", line 925, in login
    raise AuthenticationError("Login failed.", he)
splunklib.binding.AuthenticationError: Login failed.
I want to pass the _time of events from the main search to a subsearch, but it is not working well. Is there any way to do this?

index=event_data
| eval earlytime=_time-60 latesttime=_time+60
| fields earlytime,latesttime
    [ | search index=event_data2 earliest=earlytime latest=latesttime
      | return event_host,event_user ]
| table event_host,event_user

Any help would be appreciated.
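Subsearches execute before the main search, so per-event field values such as earlytime cannot be referenced from inside one this way. One hedged alternative (index names taken from the question, maxsearches chosen arbitrarily) is to invert the searches with map, which runs one secondary search per result of the first and substitutes $field$ tokens:

```
index=event_data
| eval earlytime=_time-60, latesttime=_time+60
| map maxsearches=100 search="search index=event_data2 earliest=$earlytime$ latest=$latesttime$ | table event_host, event_user"
```

Note that map can be slow when the outer search returns many events, so narrowing the outer search first is advisable.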
I am moving Splunk 6.6.1 to another empty server. Because I cannot find the Splunk 6.6.1 install package, I moved the Splunk home directory directly to the new server. I edited /opt/splunk/etc/system/local/web.conf and inputs.conf to use the new hostname. I also edited /etc/hosts so that it reads: 127.0.0.1 <new hostname> localhost. When I start Splunk I get the messages below:

-------
Checking prerequisites...
Checking http port [80]: open
Checking mgmt port [8089]: open
Checking appserver port [127.0.0.1:8065]: open
Checking kvstore port [8191]: open
Checking configuration... Done.
Checking critical directories... Done
Checking indexes...
     Validated: XXXX,YYYY
Done
Checking filesystem compatibility... Done
Checking conf files for problems... Done
Checking default conf files and edits...
Validating installed files against hashes from '/opt/splunk/splunk-6.6.1-aeae3fe0c5af-linux-2.6-x86_64-manifest'
All installed file intact. Done
All preliminary checks passed.
Starting splunk server daemon (splunkd)... Done
Waiting for web server at https://127.0.0.1:80 to be available..  ← This never becomes available.
-------

What did I miss here? I have already checked the related posts in the community with no luck. Please help me with this error. Any help will be very appreciated.
Hi Team,

Environment: 1 Search Head, 2 Indexers, 1 Deployment Server, 1 Heavy Forwarder, 1 Cluster Master

Problem statement:
1) I am unable to retrieve events when searching with index=*.
2) When I checked connectivity, all components were connected (SH --> Indexers --> CM --> HF --> DS). When searching the internal index, it shows a 401 "client is not authenticated" error. When checked from the backend, there is no error showing in splunkd.log.
We already have a dashboard on the Splunk Cloud platform. I want to trigger an external script from a dashboard panel: once I click the submit button, the script should be executed and its output displayed in the dashboard panel. The goal is to automate some day-to-day activities and stop manual interventions. For example, if a dashboard panel shows an application error, we should restart the application via the external script. Please let me know whether we can do this from a Splunk dashboard.
Hi, when I'm searching for the top users who logged into a host, I'm getting event data along with the user when I use a pipe, e.g.:

sourcetype="hostname" "authentication success" | top limit=50 User

Can someone help with this issue?
Hi guys, I have configured the Radware DDoS app in Splunk. I want to gather, in GB, the total amount of traffic from the DDoS app in Splunk (traffic that looks like an attack). The sample query is like this:

index="security" sourcetype=DefensePro action="*" policy=* | 'Top_attack_types(*)'

How do I come up with this?
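A hedged sketch: sum a byte-count field over the matching events and convert to GB. The field name `bytes` is an assumption for illustration; check which field in the DefensePro events actually carries the traffic volume and substitute it:

```
index="security" sourcetype=DefensePro action="*" policy=*
| stats sum(bytes) as total_bytes
| eval total_GB=round(total_bytes/1024/1024/1024, 2)
```

If the events report volume in kilobits or megabits instead of bytes, the eval conversion factor needs adjusting accordingly.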
Hi, a customer I am dealing with has a hybrid setup (UF, HF, DS on-prem) with the rest of the infrastructure in Splunk Cloud. There are 2800+ Universal Forwarders in a missing status. These were operational; however, filtering was not set up correctly, so they blew the 150 GB limit on Splunk Cloud. They decided to run an SCCM deployment to delete the .conf files in the UF configuration. Now a re-install of the agent and an attempt to apply the HF config are not changing these statuses. Would setting the rebuild-forwarder-assets period to 24 hours delete all forwarders with a missing status, and would these be discovered again? https://community.splunk.com/t5/Getting-Data-In/How-far-back-can-be-go-when-rebuilding-the-forwarders-assets/m-p/249196 Or do we need to do a complete uninstall of the UF package in SCCM, then re-deploy 9.0.2 with the .conf files? Thanks, Stuart
Good day, I am working on a Splunk project end to end, from log ingestion to creating search heads and dashboards. I need sample logs for Salesforce and Cisco Secure Email: nothing with sensitive information, just something old I can work with. Where can I find sample logs for appliances and other applications for use in Splunk? I have learnt there are some great websites out there. I might need to use specific TAs but will deal with that later; I just need to get my hands on sample logs and create a syslog server and all the other components. Thanks!
I am unable to push shcluster bundles after an upgrade to 9.0.2 from 8.2.7. I completed the upgrade and migrated the KV store without error, and I see the following expected settings:

serverVersion : 4.2.17
storageEngine : wiredTiger

The error I receive is:
"Error in pre-deploy check, uri=https://<HOST_NAME>/services/shcluster/captain/kvstore-upgrade/status, status=502, error=No error"

If I look in splunkd.log I get the following error for each attempt:
HttpClientRequest [2071959 TcpChannelThread] - Caught exception while parsing HTTP reply: Unexpected character while looking for value: '<'

The error from the actual command makes me think that there was an issue with the kvstore-upgrade that is just not surfacing.
Hi, I hope you are doing well. I would like some advice or suggestions for a project on setting up a NOC for an operator network. Thank you!
Hi, we're preparing to upgrade Splunk Enterprise from 8 to 9 and have a question about this requirement: "For distributed deployments of any kind, confirm that all machines in the indexing tier satisfy the following conditions: ... They do not run their own saved searches." If our indexers are also search heads, would that violate this?
Hello Splunk Experts,

Our organization has multiple applications. A work item, such as an order, passes through various applications, and the actions performed on the work item are logged. Different apps have different log formats.

Here's what I am trying to do with my dashboard. When a user enters a work item # in the dashboard input, it shows the "journey" of that work item as it is processed by each app and passed on. I have panels on the dashboard to indicate the log entries of when it was received, processed, and then passed on to the next app in the chain.

Now, I am trying to get a bit more creative. In addition to the panels, I am planning to have a label on the dashboard with a story template such as:

---
"An order placed by <username extracted from first or nth search result of app1> with <item # from input> arrived for processing at <time from first or nth search result of app1>. Then it was passed on to app2 at <time from first or nth search result of app2>. <if there is any error then> The item encountered an error in app2. The error is <error extracted from search result of app2>, etc. Please contact blah blah"
---

The idea is to generate a human-readable "story", i.e. text generated from the search results of each panel, so that someone looking at the dashboard does not have to examine multiple panels to understand what is going on. They can simply read this "story".

I am able to get the resultCount using <progress> and <condition> tags in the dashboard, but I do not know how to fetch and examine the first or nth search result, or look for specific text such as an error message or the time of the nth result, within the search results displayed in the panel for a particular app. Any hints or specific examples appreciated. Thanks much!
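One hedged way to build such a story text is a search that formats the fields of the first (earliest) result into a sentence with eval's printf function. The field names (username, item_id), index name, and dashboard token $item_tok$ below are assumptions for illustration; the resulting story field can then drive a label or single-value panel:

```
index=app1_logs item_id="$item_tok$"
| sort 0 + _time
| head 1
| eval story=printf("An order placed by %s with item %s arrived for processing at %s.", username, item_id, strftime(_time, "%d.%m.%Y %H:%M:%S"))
| table story
```

Replacing head 1 with head N and picking the last row would give the nth result instead of the first, and appendcols or a second search could extend the sentence with the app2 leg of the journey.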
I have an access log which prints lines like this:

server - - [date & time] "GET /google/page1/page1a/633243463476/googlep1?sc=RT&lo=en_US HTTP/1.1" 200 350 85

for which my rex is:

| rex field=_raw "(?<SRC>\d+\.\d+\.\d+\.\d+).+\]\s\"(?<http_method>\w+)\s(?<uri_path>\S+)\s(?<uri_query>\S+)\"\s(?<statusCode>\d+)\s(?<body_size>\d+)\s\s(?<response_time>\d+)"

Is there a way to separate the URI into two or three parts?

/google/page1/page1a/633243463476/googlep1?sc=RT&lo=en_US

to

/google
/page1/page1a/633243463476/googlep1?sc=RT&lo=en_US

or

/google
/page1/page1a/633243463476/googlep1
?sc=RT&lo=en_US
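A hedged sketch: a second rex over the already-extracted uri_path splits off the first path segment, the remaining path, and the query string. The names uri_root, uri_rest, and uri_params are made up for illustration:

```
| rex field=uri_path "(?<uri_root>/[^/?]+)(?<uri_rest>[^?]*)(?<uri_params>\?.*)?"
| table uri_root, uri_rest, uri_params
```

For the example URI this would yield /google, /page1/page1a/633243463476/googlep1, and ?sc=RT&lo=en_US; for the two-part split, combine uri_rest and uri_params with eval or simply drop the second rex's query capture.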