All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I use a dashboard with 17 panels (12 single-value panels and 5 table panels) that works in real time. In this case, real time means that I cannot use scheduled searches, because I need to have the latest events every time I launch my dashboard. By default, my time picker is set to the last 24 hours. The index is always the same, but I use 10 different sourcetypes. I must imperatively use real time.

Most of the time I use post-process searches in order to avoid querying the index and the sourcetypes many times. The problem I have is a slow display: sometimes it works almost fine, but most of the time I get the message "waiting for data" or "waiting for queued job to start". I also think that for the last 2 days there have been slowness issues on the indexer side, because I have tested other dashboards and they are slow too.

What are the best practices for real-time dashboards, please?
Hi Splunkers! I'm receiving this message from Splunk: "Received event for unconfigured/disabled/deleted index=threathunting with source="source::[T1015] Accessibility Features" host="host::hdc-sec01-siem001" sourcetype="sourcetype::stash". So far received events from 1 missing index(es)". Kindly advise. Thank you.
We use Palo Alto, Barracuda, and McAfee WGs. All perform some form of web filtering/blocking, which I'm now being asked to produce a report on: top 50 blocked categories. The SPL looks something like

index IN (Palo, Barra, MCWG) vendor_action="Blocked-URL" earliest=-8d@d latest=-1d@d
| top limit=50 category
| stats count by category

The problem is that I need to filter out links to a site (for instance, type Betfred into Google and I get two blocks, although the human never actually went to Betfred). I've also got the dilemma of multiple images being called from a web page, each being blocked.

So: how do you interpret web logs to count only unique calls by a human being to a website, rather than Google lookups or multiple returns whilst visiting another site? I've tried using dedup against user and URL, but that removes repeat attempts throughout the week along with all the image download requests; it's not very accurate or scientific. There has to be a way to work out that the web request is a link click or a URL entry rather than a page lookup, but I'm at a loss.
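The "unique human visit" problem above is essentially session-style deduplication: collapse bursts of requests (images, prefetches, search-result thumbnails) from the same user to the same site into one visit. A minimal Python sketch of the idea, where the event shape (timestamp, user, domain) and the 5-minute window are assumptions, not from the post:

```python
# Collapse bursts of requests into "visits": count a new visit only when
# the same user hits the same domain more than WINDOW seconds after
# their previous hit to it. Field names and window size are assumptions.
WINDOW = 300  # 5 minutes; tune to taste


def count_visits(events, window=WINDOW):
    """events: iterable of (epoch_seconds, user, domain) tuples."""
    last_seen = {}
    visits = 0
    for ts, user, domain in sorted(events):
        key = (user, domain)
        if key not in last_seen or ts - last_seen[key] > window:
            visits += 1
        last_seen[key] = ts
    return visits
```

In SPL, the same gap-based logic can usually be expressed with streamstats over user and domain (computing the previous hit time and filtering on the gap), which keeps genuine repeat visits during the week while dropping the image-request bursts that dedup also removes.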
I'm trying to do a line graph using this command:

source="filename.csv" sourcetype="csv" | stats sum(intake), values(gender) by academic_year

Output: [screenshot]

However, I want the total intake to show the total for each gender, male and female, so that my line graph will look something like this: [screenshot]

Thank you for the help!
We have a customer who has Splunk as their main security platform, but now they are trying to onboard other datasets for forensic/compliance/data-retention purposes and application data. This doesn't need to be in Splunk as such; any searchable tool like OpenSearch or similar would do. Before looking into such extra tools, I wanted to understand whether there is any provision with Splunk that would allow data ingestion at a cheaper cost (not counting toward the main license, or a cheaper license option).

So the scenario is: (security + compliance + application data) => Splunk Heavy Forwarder -> (A) security data to Splunk && (B) the rest of the data to a log retention service.

Before going down this avenue, I wanted to check whether Splunk provides such a cheaper license option, i.e. a log-retention mode for non-important data. (In future, they may have funding to move it into Splunk, but not for at least 6-8 months.)
Hello Community, I have a lookup file policy_search.csv that has search criteria to find specific policy events in my data. The file looks like this:

#, policy, search_criteria
1, policyA, (policy="policyA") OR
2, policyB, (policy="policyB" AND (protocol="X" OR protocol="Y")) OR
3, policyC, (policy="policyC" AND channel="ch1") OR

I want to produce a search like the one below, but using the criteria in the lookup:

index=events
| search (policy="policyA") OR
    (policy="policyB" AND (protocol="X" OR protocol="Y")) OR
    (policy="policyC" AND channel="ch1")
| table incident policy protocol channel

How could I do that? The idea is to maintain the search criteria in the lookup file and have changes reflected automatically in our reports. I'm looking for something like

index=events
| search [| inputlookup policy_search.csv | stats values(search_criteria)]
| table incident policy protocol channel

I really appreciate any help. Thank you very much! Adan Castaneda
Hi, if there are no results returned, I need to display 0 in my single-value panel along with the unit, which is "sec". So I need to display "0 sec", with the formatting options applied, even if there are no results. How can I do this, please?

<single>
  <title>Bur</title>
  <search base="hang">
    <query>| stats perc90(hang_duration_sec) as hang_duration_sec </query>
  </search>
  <option name="drilldown">none</option>
  <option name="height">85</option>
  <option name="numberPrecision">0.0</option>
  <option name="rangeColors">["0x53a051","0xf8be34","0xf1813f","0xdc4e41"]</option>
  <option name="rangeValues">[0,5,10]</option>
  <option name="refresh.display">progressbar</option>
  <option name="unit">sec</option>
  <option name="useColors">1</option>
</single>
Hi, the search below returns results:

index=tutu sourcetype=toto runq
| search NOT runq=0.0
| table runq host
| join host
    [ search index=tutu sourcetype=toto
    | fields type host cpu_core
    | stats max(cpu_core) as nbcore by host ]
| eval Vel = (runq / nbcore) / 6

but when I add

| table vel

or

| stats avg(Vel) as Vel

at the end of the search, there are no results. What is wrong, please?
Should a non-authenticated user be able to access this endpoint (POST request) https://localhost:8089/services/template/realize and create templates? And if not, what could the security impact of this be?
I installed the DB Connect plugin; when I try to use the plugin, it tries to open up, but in the background the Splunk app is crashing.

1) I tried reinstalling the previous version, 3.6; it's the same issue.
2) I reinstalled Splunk (same issue).

It looks like there is no workaround? Is it because I am using the trial version? Please help. A
I'm using curl in Splunk to download some data from an API and to build a lookup of the downloaded data. The data comes back as a single field value (curl_message). The first line is, in effect, the field list for the lookup I am going to create, and then there is the data, of which one field MAY be multi-line. So in this example:

iNote iWine Type iUser Vintage Wine SortWine Locale Producer Varietal MasterVarietal Designation Vineyard Country Region SubRegion Appellation TastingDate Defective fAllowComments Views Name fHelpful fFavorite Rating EventLocation EventTitle iEvent EventDate EventEndDate TastingNotes fLikeIt CNotes CScore LikeVotes LikePercent Votes Comments cLabels

9537078 1519682 Red 94404 2012 Wolf Blass Black Label Wolf Blass Black Label Australia, South Australia Wolf Blass Cabernet-Shiraz Blend Red Blend Black Label Unknown Australia South Australia Unknown Unknown 11/27/2021 False True 17 username 0 93 True 4 92 2 1 0 0 237

9537066 2452851 White 94404 2014 Xanadu Chardonnay Reserve Margaret River Xanadu Chardonnay Reserve Margaret River Australia, Western Australia, South West Australia, Margaret River Xanadu Chardonnay Chardonnay Reserve Unknown Australia Western Australia South West Australia Margaret River 11/3/2021 False True 23 username 0 95 Seems to be improving. 
A perfect accompaniment to prosciutto True 6 92.8333333333333 6 1 0 0 35

9516281 2778467 White 94404 2016 Weingut Thörle Saulheimer Hölle Riesling trocken Thörle, Weingut Saulheimer Hölle Riesling trocken Germany, Rheinhessen Weingut Thörle Riesling Riesling trocken Saulheimer Hölle Germany Rheinhessen Unknown Unknown 11/28/2021 False True 135 username 0 93 Paired well with Barramundi and sweet potato fries Colour: Pale gold Nose: Medium P: Lemon rind, peach, orange S: cream T: honey Palate: Dry, high acidity, medium alcohol, full bodied, pronounced flavour, medium finish P: pear, peach, lemon S: bread T: nutmeg, caramel B: 1 L: .5 I: .5 C: 1 Very good wine True 5 91 3 1 0 0 45

9431031 3300231 Red 94404 2017 Girolamo Russo Etna 'a Rina Girolamo Russo Etna 'a Rina Italy, Sicily, Etna DOC Girolamo Russo Nerello Blend Nerello Mascalese 'a Rina Unknown Italy Sicily Unknown Etna DOC 10/2/2021 False True 65 username 0 93 True 50 90.2888888888889 28 0.964285714285714 0 0 378

9431030 3580970 Red 94404 2019 Swinging Bridge Shiraz William J. Swinging Bridge Shiraz William J. Australia, New South Wales, Central Ranges, Orange Swinging Bridge Shiraz Syrah William J. Unknown Australia New South Wales Central Ranges Orange 9/11/2021 False True 0 username 0 92 True 1 92 1 1 0 0 10 Primary Black cherry, liquorice, dried herbs, black pepper, black olive, blackberry

9431025 3157557 Red - Sparkling 94404 2008 Seppelt Shiraz Show Sparkling Great Western Seppelt Shiraz Show Sparkling Great Western Australia, Victoria, Western Victoria, Great Western Seppelt Shiraz Syrah Show Sparkling Unknown Australia Victoria Western Victoria Great Western 10/15/2021 False True 252 username 0 95 Deep ruby, pronounced nose, lots of jammy red and black fruits, bubbles washing inside the mouth, filling the mouth with flavour. The finish lingering forever. True 5 93.25 3 1 0 0 28

There is the header (starting iNote...) and then 6 data 'rows' that need to be expanded.
I have used

| makemv tokenizer="(.*)\n" curl_message

but when the 'TastingNotes' field is multi-line, as in the record starting 9516281, then of course that fails to extract the multi-line value. In that case, the tasting note should run from "Paired well..." to "Very good wine". I have tried playing with rex and max_match=0. I know that a valid line starts with a (currently) 7-digit number, so I know I will never have that in the tasting-note text, but I can't figure out what the correct regex might be. I got as far as

(?s)((iNote|^\d{7}).*?)^\d{7}

but I don't know how to exclude the end-match part, which is the start of the next entry, and in any case that doesn't work as the tokenizer regex.

I don't want the data to go to an index, so I could write a scripted input that gets the data and uses sed/awk to break out the events, then the REST API to create the lookup, but that seems like overkill.
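The observation that every record starts with a 7-digit number at the beginning of a line is enough to split on; the trick is a zero-width lookahead, so the record-start marker is a split point rather than consumed text. A Python sketch of just that splitting logic (the sample strings in the test are made up):

```python
import re


def split_records(curl_message: str):
    """Split the raw multi-line payload into (header, records).

    Assumes the first line is the field list and every record -- but not
    continuation lines inside a multi-line TastingNotes value -- starts
    with 7 digits followed by whitespace at the start of a line.
    """
    lines = curl_message.splitlines()
    header, body = lines[0], "\n".join(lines[1:])
    # Zero-width lookahead: split immediately BEFORE each record start,
    # so the 7-digit id stays attached to its record.
    records = re.split(r"(?m)^(?=\d{7}\s)", body)
    return header, [r.strip() for r in records if r.strip()]
```

For makemv, a tokenizer built on the same idea, e.g. a capture like (?sm)(\d{7}\s.*?(?=^\d{7}\s|\Z)), might be worth trying, but I have not verified how makemv's tokenizer handles inline flags, so treat that as untested.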
We are using Dashboard Studio to create multiple dashboards for summaries of our security data. We have tried almost every way we can find to change the font size of the values in the different visualizations. The values in our Dashboard Studio dashboards are very small, and there is no option available to increase the font size for better presentation of these dashboards in a management overview. Has anyone tried the new Dashboard Studio extensively and found fixes for such issues?
Hi, Splunkers, I have a skill expression like the one below:

Orange > 5 & apple < 0 & ( Peach = 0 | Tomato >) & (Strawberry =7)

This skill expression covers all possible combinations. How do I develop a regex to find any invalid string in this expression? By the way, extra spaces between strings or symbols are OK here. For example, in the version below, after apple there is a double 0 with a space, there is a space inside "Tomato", and there is a missing right bracket for Strawberry =7, etc.:

Orange > 5 & apple < 0 0 & ( Peach = 0 | To mato >) & (Strawberry =7

Thanks in advance. Kevin
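One caveat on the question above: a single regex cannot verify balanced parentheses in general (regular expressions cannot count arbitrary nesting), so a practical validator usually splits the job into a bracket counter plus a per-term regex. A Python sketch, where the allowed operators (<, >, =) and the "name op value" term shape are assumptions about the skill-expression grammar:

```python
import re

# One comparison term: "name op value", with optional spaces and an
# optionally negative value. This grammar is an assumption.
TERM = r"\w+\s*[<>=]\s*-?\w+"


def is_valid(expr: str) -> bool:
    """Check bracket balance, then check every &/| separated chunk."""
    depth = 0
    for ch in expr:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:          # closing bracket with no opener
                return False
    if depth != 0:                 # unclosed opening bracket
        return False
    # Brackets are balanced: flatten them out, then every chunk between
    # & and | must be exactly one well-formed term.
    flat = re.sub(r"[()]", " ", expr)
    return all(re.fullmatch(rf"\s*{TERM}\s*", part)
               for part in re.split(r"[&|]", flat))
```

With this split, the three example defects each fail for a visible reason: "0 0" is rejected by the term regex, "To mato >" is rejected by the term regex, and the missing right bracket is caught by the depth counter.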
I'm unable to get Splunk to run in Docker using a newer MBP with an M1 Max chip on a fresh install of Monterey, as well as on a fresh install on an M1 Mac mini. I've played with as many settings as I could think of, but can't seem to find an error that indicates what's really going on. As far as I can tell, splunkd starts and binds to port 8089, but Splunk Web fails to bind to port 8000, despite the port being available.

Things I tried, and some thoughts:

- My initial thought was that port 8000 was being used by something else, so I tried many other ports with no success. Though, I had no evidence of this (using netstat).
- I then thought that maybe there was a firewall entry not being added correctly, so I checked iptables; it doesn't exist. I then checked firewalld; it also doesn't appear to exist. So, no firewall?
- I had a friend take my exact docker-compose file and install everything on an older, non-up-to-date MacBook Air running on an Intel chip. That worked...
- I also tried adjusting the timeout values listed in the sensible vars list; that didn't seem to work. Where am I supposed to mount the docker.yaml file? Where I mounted it didn't work. The var SPLUNK_CONNECTION_TIMEOUT, added directly to the compose file, didn't make a difference either.
- I even tried starting Splunk with debug mode and saw nothing helpful there.
The actual output, noting the time taken under "Start Splunk via CLI" and failed=1:

sh1  | PLAY RECAP *********************************************************************
sh1  | localhost                  : ok=51   changed=7    unreachable=0    failed=1    skipped=48   rescued=0    ignored=0
sh1  |
sh1  | Friday 14 January 2022  21:48:52 +0000 (0:04:22.382)       0:05:46.139 ********
sh1  | ===============================================================================
sh1  | splunk_common : Start Splunk via CLI ---------------------------------- 262.38s
sh1  | splunk_common : Get Splunk status --------------------------------------- 8.02s
sh1  | splunk_common : Update Splunk directory owner --------------------------- 6.01s
sh1  | Gathering Facts --------------------------------------------------------- 5.73s
sh1  | splunk_common : Generate user-seed.conf (Linux) ------------------------- 4.70s
sh1  | splunk_common : Cleanup Splunk runtime files ---------------------------- 4.30s
sh1  | splunk_common : Update /opt/splunk/etc ---------------------------------- 3.87s
sh1  | splunk_common : Check for scloud ---------------------------------------- 3.00s
sh1  | splunk_common : Hash the password --------------------------------------- 2.77s
sh1  | splunk_common : Find manifests ------------------------------------------ 2.52s
sh1  | splunk_common : Remove input SSL settings ------------------------------- 2.22s
sh1  | splunk_common : Check for existing installation ------------------------- 2.21s
sh1  | splunk_common : Create .ui_login ---------------------------------------- 2.21s
sh1  | splunk_common : Check if /opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key exists --- 2.19s
sh1  | splunk_common : Enable splunktcp input ---------------------------------- 2.18s
sh1  | splunk_common : Enable Splunkd SSL -------------------------------------- 2.18s
sh1  | splunk_common : Enable Web SSL ------------------------------------------ 2.18s
sh1  | splunk_common : Trigger restart ----------------------------------------- 2.17s
sh1  | splunk_common : Remove splunktcp-ssl input ------------------------------ 2.16s
sh1  | splunk_common : Set Splunkd Connection Timeout -------------------------- 2.16s
sh1 exited with code 2

Here's the docker-compose (worked on the older Mac):

version: "3.9"
services:
  sh1:
    platform: linux/amd64
    image: splunk/splunk:latest
    container_name: sh1
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_PASSWORD=Passw0rd!
      - SPLUNK_ROLE=splunk_search_head
      - SPLUNK_HTTP_PORT=8000
      - SPLUNK_CONNECTION_TIMEOUT=300
    ports:
      - 8000:8000
      - 8089:8089

Any thoughts? Can someone on OS 12.1 with an M1 chip get this to work? Additionally, can someone running OS 12.1 with an Intel chip validate that this works? Maybe the issue is with the M1 chip, not the OS version. Or maybe it's just an issue with 12.1.

Edit: I now have evidence that the compose file I posted works on an Intel-based Mac running 12.1. Therefore, I think it's safe to say the issue is one of compatibility between the Splunk-Docker image and the M1 Mac.
What is the character limit of an alert name in Splunk ES?
Hey guys, I'm trying to create a dashboard that shows any host, within a group of specified hosts, that is not returning data from a specific sourcetype.

What I have been trying so far, with no success, is:

index=xyz host=abc sourcetype=def
| timechart span=30min count by host usenull=f useother=f
| where count < 1

This won't show anything, because it's going to have no events to report. But I'm not sure how I can create a variable based upon having no results back within a specific time, and then do a timechart based upon the new variable by host. Unless I'm going about this completely wrong, lol. Please help.
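One way to frame the problem above: a search cannot select rows that do not exist, so the usual trick is to start from the expected host list and subtract the hosts that did report. A minimal Python sketch of that set logic (the host names are hypothetical):

```python
def missing_hosts(expected, reporting):
    """Hosts that should report data but did not.

    expected:  the full group of hosts you care about
    reporting: hosts actually seen in the sourcetype for the window
    """
    return sorted(set(expected) - set(reporting))


# Example with made-up host names:
silent = missing_hosts(["abc1", "abc2", "abc3"], ["abc1"])
```

In SPL, the analogous shape is to bring in the expected host list from a lookup, combine it with a count of events per host for the window, and keep only the hosts whose count is zero, rather than filtering inside timechart where the silent hosts never appear.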
I want to access the title, owner, etc., of the currently running scheduled alert via SPL syntax. I want to append this information to a lookup table so that a variety of alerts or saved searches with a variety of formatted outputs can be centralized into one output location. I understand that variables like $job.title$ and $results.count$ are available when setting up these jobs via the Alert or Scheduler UIs, but how do I access these environment variables via SPL? Should I use the '| rest ....' command? Which REST APIs have the job title/owner info?

Here is what I want to produce:

| <query for outlier info by hostname>
| <summarize data by hostname>
| <rest commands to extract job title/owner/etc. info>
| table timestamp, hostname, error_count, <multi-value fields with alert-specific data>, $job.title$, $job.owner$
| outputlookup append=t ..... <tracking_history_table.csv>
I am trying to install pip for the Splunk-provided Python, python3.7. From what I can see, Python is located in the /opt/splunk/bin directory, so I am using the command:

python3.7 get-pip.py --user

after getting into that directory. However, I am getting errors such as:

ModuleNotFoundError: No module named 'distutils.command'

Is there already a pip installed here? What do I do? As far as I know, pip usually comes with python3, but I do not see it here. Without pip, it is extremely difficult to install any package, so I am stuck in a loop.
| rex field=Uptime "(?<Uptime_Days>^([^d]+))"
| eval Uptime_Years=(Uptime_Days/365)
| dedup Host_Name
| eval Description=(case(Uptime_Years>=7, "Over 7 Years", Uptime_Years<7 AND Uptime_Years>3, "3 to 7 Years", Uptime_Years<3, "Less than 3 Years"))
| rename Description as "Uptime_Category"
| stats count(Host_Name) as total by Uptime_Category
| eventstats sum(total) as grand_total
| eval percentage = round((total/grand_total)*100,1)
| table Uptime_Category percentage
| eval Description=(case(Uptime_Category="Over 7 Years" AND percentage>="10%","Poor", Uptime_Category="3 to 7 Years" AND percentage>="20%","Needs Attention", Uptime_Category="Less than 3 Years" AND percentage>="80%","Good"))
| rename Description as "Health Status"
| stats count by "Health Status"

Need help with color coding: Poor to red, Needs Attention to yellow, and Good to green. @niketn
Hi, I am working on a query to retrieve counts of repeated, unique, and total visits by user through different channels. A user can access my application through different channels like Email, SMS, and Apps. For every channel, count and output the number of new users (only one event), repeated users (more than one event), and the final totals (= new + repeated). The log data is in JSON format, and there are two main fields relevant to achieving the results: first cust_id (a unique customer id), and second channel_type.

Example expected results output:

channel_type    repeated_customers    new_customers    total
------------    ------------------    -------------    -----
Apps                             4                1        5
Email                            2                2        4
SMS                              1                5        6

So far I have developed the query below, which is not giving the expected result:

index=cust_app sourcetype=cust_rec
| search log="*Cus Responeded*"
| rex field=log "(?<applog>{(?:[^}{]+|(?R))*+})"
| spath input=applog output=channel_type path=channel_type
| spath input=applog output=cust_id path=cust_id
| stats count by channel_type cust_id

How do I get the expected results from the given field values in the data? Thanks in advance. @niketn @elliotproebstel @twinspop
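The new/repeated bucketing described above is really a two-pass count: first events per (customer, channel), then customers per channel by bucket. A Python sketch of that logic, with made-up (cust_id, channel_type) pairs standing in for the JSON log records:

```python
from collections import Counter, defaultdict


def visit_summary(events):
    """events: iterable of (cust_id, channel_type) pairs.

    Returns per-channel counts of new customers (exactly one event),
    repeated customers (more than one), and their total.
    """
    # Pass 1: events per (customer, channel).
    per_cust = Counter(events)
    # Pass 2: bucket each customer as new or repeated within its channel.
    summary = defaultdict(lambda: {"new": 0, "repeated": 0, "total": 0})
    for (cust, channel), n in per_cust.items():
        kind = "new" if n == 1 else "repeated"
        summary[channel][kind] += 1
        summary[channel]["total"] += 1
    return dict(summary)
```

In SPL, the analogous shape is the `stats count by channel_type cust_id` the query already has, followed by an eval that labels each customer new or repeated based on that count, and then a second aggregation by channel_type.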