I want to pass the _time of each event in my main search to a subsearch, but I can't get it to work. Is there a way to do this?

index=event_data | eval earlytime=_time-60, latesttime=_time+60 | fields earlytime, latesttime [ |search index=event_data2 earliest=earlytime latest=latesttime | return event_host, event_user ] | table event_host, event_user

Any help would be appreciated.
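A subsearch executes before the outer search, so per-event _time values cannot flow into it via fields. One common workaround is the map command, which runs one search per outer result; a minimal sketch, assuming the field and index names from the post:

index=event_data
| eval earlytime=_time-60, latesttime=_time+60
| map maxsearches=100 search="search index=event_data2 earliest=$earlytime$ latest=$latesttime$ | fields event_host, event_user"
| table event_host, event_user

Note that map replaces the outer events with the per-search results and defaults to maxsearches=10, so the limit usually needs to be raised explicitly.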
I am moving Splunk 6.6.1 to another empty server. Because I cannot find the Splunk 6.6.1 install package, I moved the Splunk home directory to the new server as-is. I edited /opt/splunk/etc/system/local/web.conf and inputs.conf to use the new host name. I also edited /etc/hosts so it reads "127.0.0.1 new-host-name localhost". When I start Splunk I get the messages below:

-------
Checking prerequisites...
Checking http port [80]: open
Checking mgmt port [8089]: open
Checking appserver port [127.0.0.1:8065]: open
Checking kvstore port [8191]: open
Checking configuration... Done.
Checking critical directories... Done
Checking indexes...
Validated: XXXX,YYYY
Done
Checking filesystem compatibility... Done
Checking conf files for problems... Done
Checking default conf files and edits...
Validating installed files against hashes from '/opt/splunk/splunk-6.6.1-aeae3fe0c5af-linux-2.6-x86_64-manifest'
All installed files intact.
Done
All preliminary checks passed.
Starting splunk server daemon (splunkd)... Done
Waiting for web server at https://127.0.0.1:80 to be available... ← This never becomes available.
-------

What did I miss here? I have already checked the related posts in the community with no luck. Please help me with this error. Any help would be very appreciated.
Hi Team,

Environment: 1 Search Head, 2 Indexers, 1 Deployment Server, 1 Heavy Forwarder, 1 Cluster Master

Problem Statement:
1) I am unable to retrieve events when searching with index=*
2) When I checked connectivity, everything was connected (SH --> Indexers --> CM --> HF --> DS). When searching the internal index, I get "401 client is not authenticated". When checked from the backend, there are no errors in splunkd.log.
We already have a dashboard on the Splunk Cloud Platform. I want to trigger an external script from a dashboard panel: once I click the submit button, the script should be executed and its output displayed in a dashboard panel. The goal is to automate some day-to-day activities and remove manual intervention. For example, if a dashboard panel shows an application error, we should restart the application via an external script. Please let me know whether we can do this from a Splunk dashboard.
Hi,

When I search for the top users who logged into a host, I get raw event data along with the user as soon as I use the pipe. For example:

sourcetype="hostname" "authentication success" | top limit=50 User

Can someone help with this issue?
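The output of a transforming command such as top appears under the Statistics tab, while the Events tab continues to show the raw events, so checking the Statistics tab may be all that is needed. If User is not already an extracted field, a sketch assuming the username appears in the raw text as user=NAME (the actual format may differ):

sourcetype="hostname" "authentication success"
| rex "user=(?<User>\S+)"
| top limit=50 User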
Hi guys,

I have configured the Radware DDoS app in Splunk. I want to gather the total amount of traffic reported by the DDoS app (traffic that appears to be an attack) in GB. The sample query looks like this:

index="security" sourcetype=DefensePro action="*" policy=* | `Top_attack_types(*)`

How do I go about this?
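Assuming the DefensePro events carry a per-event byte count (the field name bandwidth_bytes below is a placeholder; check the actual field name in your data), a sketch of the aggregation:

index="security" sourcetype=DefensePro action="*" policy=*
| stats sum(bandwidth_bytes) as total_bytes
| eval total_GB = round(total_bytes / 1024 / 1024 / 1024, 2)
| table total_GB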
Hi,

A customer I am dealing with has a hybrid setup (UF, HF, DS on-prem) and the rest of the infrastructure in Splunk Cloud. There are 2800+ Universal Forwarders in a "missing" status. These were operational; however, filtering was not set up correctly, so they blew through the 150 GB limit on Splunk Cloud. The customer ran an SCCM deployment to delete the .conf files in the UF configuration. Now, reinstalling the agent and trying to apply the HF config is not changing these statuses.

Would setting the rebuild forwarder assets period to 24 delete all forwarders with missing status, and would these be discovered again? https://community.splunk.com/t5/Getting-Data-In/How-far-back-can-be-go-when-rebuilding-the-forwarders-assets/m-p/249196

Or do we need to do a complete uninstall of the UF package in SCCM, then re-deploy 9.0.2 with the .conf files?

Thanks, Stuart
Good day,

I am working on a Splunk project end to end, from log ingestion to creating search heads and dashboards. I need sample logs for Salesforce and Cisco Secure Email: nothing with sensitive information, just something old I can work with. Where can I find sample logs for appliances and other applications for use in Splunk? I understand there are some great websites out there. I might need to use specific TAs but will deal with that later; for now I just need to get my hands on sample logs and set up a syslog server and the other components.

Thanks!
I am unable to push shcluster bundles after an upgrade to 9.0.2 from 8.2.7. I completed the upgrade and migrated the KV store without error, and I see the following expected settings:

serverVersion : 4.2.17
storageEngine : wiredTiger

The error I receive is:

"Error in pre-deploy check, uri=https://<HOST_NAME>/services/shcluster/captain/kvstore-upgrade/status, status=502, error=No error"

If I look in splunkd.log, I get the following error for each attempt:

HttpClientRequest [2071959 TcpChannelThread] - Caught exception while parsing HTTP reply: Unexpected character while looking for value: '<'

The error from the actual command makes me think there was an issue with the kvstore upgrade that is just not surfacing.
Hi, I hope you are doing well. I would like advice or suggestions for a project on setting up a NOC (Network Operations Center) for an operator network. Thank you.
Hi,

We're preparing to upgrade Splunk Enterprise from 8 to 9 and have a question about this requirement:

"For distributed deployments of any kind, confirm that all machines in the indexing tier satisfy the following conditions: ... They do not run their own saved searches"

If our indexers are also search heads, would that violate this?
Hello Splunk Experts,

Our organization has multiple applications. A work item, such as an order, passes through various applications, and the actions performed on the work item are logged. Different apps have different log formats. Here's what I am trying to do with my dashboard: when a user enters a work item # in the dashboard input, it shows the "journey" of that work item as it is processed by each app and passed on. I have panels on the dashboard showing the log entries for when the item was received, processed, and then passed on to the next app in the chain.

Now, I am trying to get a bit more creative. In addition to the panels, I am planning to have a label on the dashboard with a story template such as:

---
"An order placed by <username extracted from first or nth search result of app1> with <item # from input> arrived for processing at <time from first or nth search result of app1>. Then it was passed on to app2 at <time from first or nth search result of app2>. <if there is any error then> The item encountered an error in app2. The error is <error extracted from search result of app2>. Please contact blah blah"
---

The idea is to generate a human-readable "story", i.e. text built from the search results of each panel, so that someone looking at the dashboard does not have to examine multiple panels to understand what is going on; they can simply read the story. I am able to get the resultCount using the <progress> and <condition> tags in the dashboard, but I do not know how to fetch and examine the first or nth search result, or look for specific text such as an error or the time of the nth result, within the search results displayed in the panel for a particular app. Any hints or specific examples appreciated. Thanks much!
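One way to approach this is to assemble the story string inside the search itself and then expose it to the dashboard through the $result.story$ token of a <done> handler. A rough sketch, in which the index, the item/user/error field names, and the message wording are all placeholders:

index=app_logs item_id="$item_tok$"
| stats earliest(_time) as arrived values(user) as user values(error) as error by app
| sort arrived
| eval step=app." at ".strftime(arrived, "%Y-%m-%d %H:%M:%S")
| eval step=if(isnotnull(error), step." (error: ".mvindex(error,0).")", step)
| stats list(step) as steps values(user) as user
| eval story="An order placed by ".mvindex(user,0)." was processed by: ".mvjoin(steps, ", then ")
| fields story

The story field of the first result can then feed a token (e.g. <done><set token="story">$result.story$</set></done>) that a simple HTML panel displays as the label.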
I have an access log which prints entries like this:

server - - [date & time] "GET /google/page1/page1a/633243463476/googlep1?sc=RT&lo=en_US HTTP/1.1" 200 350 85

My rex is:

| rex field=_raw "(?<SRC>\d+\.\d+\.\d+\.\d+).+\]\s\"(?<http_method>\w+)\s(?<uri_path>\S+)\s(?<uri_query>\S+)\"\s(?<statusCode>\d+)\s(?<body_size>\d+)\s\s(?<response_time>\d+)"

Is there a way to separate the URI into two or three parts?

/google/page1/page1a/633243463476/googlep1?sc=RT&lo=en_US

to

/google
/page1/page1a/633243463476/googlep1?sc=RT&lo=en_US

or

/google
/page1/page1a/633243463476/googlep1
?sc=RT&lo=en_US
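A second rex over the already-extracted uri_path can do the split; a sketch, checked only against the sample URI above:

... | rex field=uri_path "(?<uri_root>/[^/?]+)(?<uri_rest>[^?]*)(\?(?<uri_params>.*))?"

For /google/page1/page1a/633243463476/googlep1?sc=RT&lo=en_US this yields uri_root=/google, uri_rest=/page1/page1a/633243463476/googlep1, and uri_params=sc=RT&lo=en_US.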
Hi, I am a new Splunk user and this is my first post on the community forum; if I am not following the guidelines, please let me know. I am getting an error for the last line of my search. What is the issue?

index=web | eval hash=md5(file) | stats count by file, hash | sort - count | eval bad_hash=case((hash==7bd51c850d0aa1df0a4ad7073aeaadf7), "malicious_file")
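The likely culprit is the unquoted hash literal: eval treats the bare token as an expression rather than a string, and it is not valid syntax. Quoting it should work:

index=web
| eval hash=md5(file)
| stats count by file, hash
| sort - count
| eval bad_hash=case(hash=="7bd51c850d0aa1df0a4ad7073aeaadf7", "malicious_file")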
I have the following dropdowns. If I select test1 and then test11 in the second dropdown, and then change the first one to test2, the second dropdown holds the old value and shows test11, although that's no longer a valid option.

<form version="1.1" theme="dark">
  <fieldset>
    <input type="dropdown" token="test">
      <label>Test</label>
      <choice value="*">All</choice>
      <choice value="test1">test1</choice>
      <choice value="test2">test2</choice>
    </input>
    <input type="dropdown" token="dependsontest" searchWhenChanged="false">
      <label>DependsOnTest</label>
      <fieldForLabel>$test$</fieldForLabel>
      <fieldForValue>$test$</fieldForValue>
      <choice value="*">All</choice>
      <search>
        <query>| makeresults | fields - _time | eval test1="test11,test12", test2="test21,test22" | fields $test$ | makemv $test$ delim="," | mvexpand $test$</query>
      </search>
    </input>
  </fieldset>
</form>

I tried <selectFirstChoice>true</selectFirstChoice> and default values, but those didn't work. How can I go about this?

EDIT: Using <change> and <condition>, I found that the dependsontest token gets updated but the dropdown UI doesn't.
I am using the Python SDK to run Splunk queries at a 10-minute interval to collect data for my application. I have nearly 300 queries that I need to run every 10 minutes. I have 4 FIDs to run these 300 queries, so roughly 75 queries per FID, and I am using ProcessPoolExecutor in Python to execute only 20 at a time so the concurrency limit is not reached. What I am observing is that I sometimes get results and sometimes get no data from Splunk, even though the connection to Splunk was successful and the query completed with no errors. Am I reaching any limits here?

from splunklib import client, results

# Time window for the one-shot search: the last 10 minutes
splunkResultsReaderParameters = {
    "earliest_time": "-10m",
    "latest_time": "now",
}

# splunkService is an authenticated client.connect(...) session and
# query is the SPL string to run; pass the time window defined above
oneshotsearch_results = splunkService.jobs.oneshot(query, **splunkResultsReaderParameters)

# Iterate over the returned result stream
reader = results.ResultsReader(oneshotsearch_results)
I am running Squid 5.2 and having an issue adding the splunk_recommended_squid log format to my Squid configuration. I pulled the log format straight out of the Splunk documentation; I'll paste it at the end of this message. When I try to start Squid with that log format, I get an error:

FATAL: Bungled /etc/squid/squid.conf line 11: logformat splunk_squid %ts.%03tu logformat=splunk_recommended_squid duration=%tr src_ip=%>a src_port=%>p dest_ip=%<a dest_port=%<p user_ident="%[ui" user="%[un" local_time=[%tl] http_method=%rm request_method_from_client=%<rm request_method_to_server=%>rm url="%ru" http_referrer="%{Referer}>h" http_user_agent="%{User-Agent}>h" status=%>Hs vendor_action=%Ss dest_status=%Sh total_time_milliseconds=%<tt http_content_type="%mt" bytes=%st bytes_in=%>st bytes_out=%<st sni="%ssl::>sni"

I haven't been able to find anything solid to help with this. Has anyone else experienced this?

Thanks,
-Rob
I have a Splunk log like this:

Client Map Details : {A=123, B=245, C=456}

The map can contain more or fewer values than these 3, maybe 0 to 10 entries. I want to get the sum of all the values in the map and plot it on a graph. For example, for the event above, 123+245+456=X, and I need to plot X. I am able to get the multivalue field with:

index=temp sourcetype="xyz" "Client Map Details : "
| rex field=_raw "Client Map Details \{(?<map>[A-Z_0-9= ,]+)\}"
| eval temp=split(map, ",")

The output from the above is:

A=123
B=245
C=456

Now how can I iterate over each value in temp, split it by "=", and get each value? Or is there a better way to do this? And how do I plot a graph from this?
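Rather than iterating, the numbers can be pulled out directly with max_match and summed after an mvexpand; a sketch, assuming every map value is a bare integer following an = sign:

index=temp sourcetype="xyz" "Client Map Details : "
| rex field=_raw max_match=0 "[A-Z_0-9]+=(?<v>\d+)"
| mvexpand v
| timechart span=5m sum(v) as total

mvexpand turns the multivalue field into one row per map entry, and timechart then sums them per time bucket for plotting.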
Hello,

I need to generate 1000+ records (5-10 fields) of fake PII. What best practices, SPL, or processes have you designed to create this via | makeresults or lookups/KV stores?

Thanks and God bless,
Genesius
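A minimal sketch with makeresults and random(); every field format here is invented and should be adapted to the fields you need:

| makeresults count=1000
| streamstats count as row
| eval first_name="First".row,
       last_name="Last".row,
       email="user".row."@example.com",
       phone="555-0".substr("000".tostring(random() % 1000), -3),
       dob=strftime(now() - (random() % 1500000000), "%Y-%m-%d")
| fields - _time
| outputlookup fake_pii.csv

The outputlookup at the end persists the records as a CSV lookup so they can be reused across searches instead of being regenerated each time.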
I am trying to create a derived metric using |Custom Metrics|Linux Monitor|cpu|CPU (Cores) Logical. I keep getting the error:

WARN NumberUtils-Linux Monitor - Unable to validate the value as a valid number java.lang.NumberFormatException: For input string: "Cores"

Does anyone know how to use that metric in a derived metric? I believe it is trying to parse "Cores" as its own metric because of the parentheses in the metric name. I tried using escape characters with no luck.