I have a CSV file with the delimiter "|", like the sample below, for a new ingestion. I wanted to use the standard sourcetype csv, but standard:csv has "," as its delimiter. Can we add more than one delimiter in the FIELD_DELIMITER attribute? If so, how can I do that? Or, for the file below, do we need to define a separate sourcetype with its own delimiter? I tried FIELD_DELIMITER=",||" but it is not working.

name|age|class
john|9|first
bob|10|second

Could you please help?
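For what it's worth, FIELD_DELIMITER expects a single delimiter character, so a pipe-delimited file usually gets its own sourcetype rather than a modified csv. A minimal props.conf sketch (the stanza name pipe_csv is an assumption, not a standard sourcetype):

```ini
# props.conf -- hypothetical sourcetype for pipe-delimited files
# (sketch only; check the props.conf spec for your Splunk version)
[pipe_csv]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = |
FIELD_NAMES =
HEADER_FIELD_LINE_NUMBER = 1
```

This assumes the file is ingested with indexed extractions on the forwarder or indexer; the header line then supplies the field names (name, age, class).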
Hi there, I have managed to include other fields in a chart title, but I cannot get the selected time period to appear as a dynamic variable in the title of a chart. I can set this chart title:

Time chart of Users and Journal notes ($care_unit_tok$)

which, after selecting the Care Unit "ASIH Botkyrka", results in this title:

Antal användare och journalanteckningar (| where (care_unit = "ASIH Botkyrka"))

But I fail to do the same with time_tok, e.g. like this:

Time chart of Users and Journal notes ($time_tok$)

Even though I have made a time selection, which I can see in the filter as Jul 2023, I only get this text over the chart, with no time period displayed:

Time chart of Users and Journal notes

Thanks for any ideas on how to solve this.
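A sketch of the usual workaround, assuming a Simple XML dashboard with a time input whose token is time_tok: a time input does not populate $time_tok$ itself, but it does expose .earliest and .latest sub-tokens that can be used in a title.

```xml
<!-- Simple XML sketch: time inputs expose .earliest/.latest sub-tokens -->
<input type="time" token="time_tok" searchWhenChanged="true">
  <label>Time range</label>
</input>

<panel>
  <title>Time chart of Users and Journal notes ($time_tok.earliest$ - $time_tok.latest$)</title>
</panel>
```

Note that the raw token values may be epoch timestamps or relative-time strings such as "-24h@h", so a small conversion search (e.g. strftime on addinfo's info_min_time/info_max_time) may be needed if a human-readable date is wanted.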
We had a problem where certain fields weren't searchable. index=foo bar=* did not show any results, even though the interesting fields and the events themselves showed that the fields were extracted as expected. Searching for specific values also didn't help. Some fields did return results, but only if you added wildcards:

index=foo bar="*value"

It was possible to find the events using:

| where isnotnull(foo)

or:

| eval check = if(foo="bar",1,0) | where check = 1

but any other method did not return results. After some research in the community I found posts describing similar problems. Some suggested that there could be hidden fields present. Most of the time the suggestion was to enable indexed extractions in fields.conf and add a wildcard there:

[MyField]
INDEXED_VALUE = *<VALUE>

While trying out whether this option would work for me, I compared the existing fields.conf files in the environment and noticed that all of the affected missing field values are part of the fields.conf of another app. By default, fields.conf is global, if I'm not mistaken, so the fields defined there as indexed extractions affected other sourcetypes. Since the fields in those other apps did not include the parameters necessary to extract successfully at index time, this error occurred. Setting export to none in the metadata of the app that holds the fields.conf file solved my problem. local/default.meta:

[fields]
export = none

As the fields.conf docs say: "Do not create indexed fields with names that collide with names of fields that are extracted at search time." I was aware of this warning when creating new fields myself, but forgot to check whether any of the installed apps already do so. Hope this helps anyone who may be in a similar situation!
TL;DR: If you have trouble with extracted fields and can't search on them even though they are shown in the events and interesting fields, check your environment for fields.conf files and whether the affected fields are defined there as indexed extractions. If so, limit the scope of that fields.conf via metadata to its original app.
The Slack channel mentioned here: https://hub.docker.com/r/splunk/splunk is private; I'd like to join it.
Was blacklist/whitelist replaced by denylist/allowlist in Splunk 9? In some blogs I read that Splunk 9 replaced blacklist with denylist. Or is blacklist still usable? In the changelogs for Splunk 9 I didn't find any evidence of the change, but the Splexicon and some blogs say otherwise. https://docs.splunk.com/Splexicon:Denylist https://www.splunk.com/en_us/blog/leadership/biased-language-has-no-place-in-tech-a-follow-up.html?locale=en_us Thanks for any explanation.
The app installs, the readme is there, and the in-browser CyberChef.html works just fine. However, when I try to execute SPL using cyberchef, like the example from the documentation:

| makeresults count=3
| streamstats count
| eval data=random()/random()
| cyberchef infield='data' outfield='convertedData' operation="ToBase64"
| table data convertedData

I get an error from Splunk:

Error in 'cyberchef' command: CyberChef chef.bake is not a function

I'm trying to get this running on a single test instance in a VM, and all file permissions look right. Am I missing something, or is this a symptom of testing the app on the "free" version of Splunk (I seem to remember Enterprise being needed)? Any suggestions and/or feedback appreciated!
Hello, I have deployed an app to a distributed search head cluster. This app contains only a props.conf file in the default directory. In that props.conf there is a calculated field writing a vendor-product to a specific sourcetype. In the Search app this field is not populated, I believe because the sharing permission is set to app rather than to global. How can I change the permission from app to global? Initially I created this app on the search head deployer in the directory shcluster/apps by adding a directory TEST; there I created a directory default, and in it a file props.conf. Then I deployed it to the search heads. I see the app on all the search heads; it just has the wrong sharing permission. Thanks, Alex
Hi Splunkers. I've been trying for weeks to do the following. I have a search that outputs a table with MITRE techniques:

index=notable search_name="Endpoint - KTH*"
| fields tactic_mitre, technique_mitre, risk_mitre, src_user, user, Computer, dest, search_name
| stats count by technique_mitre
| eval codes_tech=technique_mitre
| makemv delim=", " codes_tech
| mvexpand codes_tech
| rename count as carry
| stats sum(carry) as Total by codes_tech

(result screenshot omitted)

And I have another query that gives me a table with the IDs and names of the MITRE techniques and tactics:

| inputlookup mitre_lookup
| foreach TA00* [| lookup mitre_tt_lookup technique_id as <<FIELD>> OUTPUT technique_name as <<FIELD>>_technique_name
    | eval <<FIELD>>_technique_name=mvindex(<<FIELD>>_technique_name, 0)
    | eval <<FIELD>>=<<FIELD>>_technique_name . " - " . <<FIELD>>]
| fields TA0043, TA0001, TA0002, TA0003, TA0004, TA0005, TA0006, TA0007, TA0008, TA0009, TA0011, TA0010, TA0040, TA0042
| rename TA0043 as "Reconnaissance", TA0042 as "Resource Development", TA0001 as "Initial Access", TA0002 as "Execution", TA0003 as "Persistence", TA0004 as "Privilege Escalation", TA0005 as "Defense Evasion", TA0006 as "Credential Access", TA0007 as "Discovery", TA0008 as "Lateral Movement", TA0009 as "Collection", TA0011 as "Command and Control", TA0010 as "Exfiltration", TA0040 as "Impact"

(result screenshot omitted)

I would like to search within the MITRE table for the codes_tech values from the first query: if Total is greater than 0, put the Total there; otherwise leave the other IDs at 0. Please, I really need your help!
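One possible approach, sketched: have the first search write its totals to a lookup file, then pull those totals into the second query and default missing techniques to 0. The lookup file name kth_technique_totals.csv is made up for illustration.

```spl
index=notable search_name="Endpoint - KTH*"
| stats count by technique_mitre
| eval codes_tech=technique_mitre
| makemv delim=", " codes_tech
| mvexpand codes_tech
| stats sum(count) as Total by codes_tech
| outputlookup kth_technique_totals.csv
```

The second query can then enrich each technique ID before the renames:

```spl
| inputlookup mitre_tt_lookup
| lookup kth_technique_totals.csv codes_tech as technique_id OUTPUT Total
| fillnull value=0 Total
```

This is only a sketch under the assumption that technique_id in mitre_tt_lookup matches the codes_tech values (e.g. T1059); if the formats differ, a normalizing eval would be needed first.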
Hi, I am trying to create a schema in Analytics using Postman, but I am getting the error below:

"Error provisioning account on any cluster"

I am following the steps in the docs:

POST http://analytics.api.example.com/events/schema/{schemaName} HTTP/1.1
X-Events-API-AccountName:<global_account_name>
X-Events-API-Key:<api_key>
Content-Type: application/vnd.appd.events+json;v=2
Accept: application/vnd.appd.events+json;v=2

{
  "schema" : {
    "account": "integer",
    "amount": "float",
    "product": "string"
  }
}

And we are using the Analytics API URL below, as our controller is in the North America region: https://analytics.api.appdynamics.com
I have a "Severity Level" field in both index A and index B. Their structures are like:

==index A==
Severity Level
1
2
3
4

==index B==
Severity Level
critical
high
medium
low

Now I want to combine the two indexes in a search and display the Severity Level using

| timechart count by "Severity Level"

where the combined "Severity Level" values contain only 1, 2, 3, 4. So I need an eval case() to map them. My syntax for that is

eval "Severity Level" = case('Severity Level' == "critical", 1, 'Severity Level' == "high", 2, 'Severity Level' == "medium", 3, 'Severity Level' == "low", 4, 'Severity Level' == 1, 1, 'Severity Level' == 2, 2, 'Severity Level' == 3, 3, 'Severity Level' == 4, 4, 1=1, null)

With this, the result is incorrect, i.e., it only shows (incorrect) counts for 4. I think the problem is with the single and double quotes, but I am not sure which is which. It is a bit urgent, so I need help. Thanks.
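For reference: in eval expressions, single quotes refer to field names and double quotes create string literals; also, the values arriving from index A may compare as strings rather than numbers. A hedged sketch that normalizes everything to a lowercase string first and writes to a new field name (severity_num is an invented name) to sidestep the quoting of a multi-word destination field:

```spl
| eval sev = lower(tostring('Severity Level'))
| eval severity_num = case(
    sev=="critical", 1,
    sev=="high",     2,
    sev=="medium",   3,
    sev=="low",      4,
    sev=="1", 1, sev=="2", 2, sev=="3", 3, sev=="4", 4)
| timechart count by severity_num
```

With no default branch, case() returns null for anything unmatched, which keeps stray values out of the timechart.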
I have Snort shoving JSON logs to Splunk, and I see everything just fine when I do the following search:

sourcetype="snort3:alert:json"

I see many events! However, I have to create alerts based on suspicious activities that I already search for in Splunk. Is there some way to load pre-set common alerts into Splunk? Do I have to go through the task of making each alert myself (e.g., create an alert for bad RDP logins, lengthy ICMP probes, bad FTP logins, etc.)? I guess what I'm asking is: is there a way to have Splunk populate alert rules that are predefined, or even for zero days?
Hello Splunk Community, I'm trying to extract fields from CloudWatch events: 1) region 2) arn 3) startTime 4) endTime 5) eventTypeCode 6) latestDescription. The regex works fine in regex101; however, it's not extracting all the field values in Splunk. For example:

| rex field=_raw "region":\s(?P<_region>"\w+-\w+-\d)"

The above rex only extracts the us-east-1 region, while I have multiple regions in the data. Please help me extract the fields I mentioned.

Sample event:
2020-02-10T17:42:41.088Z 775ab4c6-ccc3-600b-9c84-124320628f00 {"records": [{"value": {"successfulSetoflog": [{"awsAccountId": "123456789123", "event": {"arn": "arn:aws:health:us-east-........................................................
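A sketch of one way to capture every region value from JSON like the sample. Two things to note: the double quotes inside the SPL pattern must be escaped, and a field name starting with an underscore (like _region) is hidden from results. max_match=0 asks rex to return all matches as a multivalue field:

```spl
| rex field=_raw max_match=0 "\"region\":\s*\"(?<region>[a-z]+-[a-z]+-\d)\""
```

This assumes the raw event actually contains "region": "..." key/value pairs; the same max_match=0 pattern can be repeated for arn, startTime, endTime, eventTypeCode, and latestDescription with appropriate capture groups.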
Hello Community, I have tried searching, but I've not found an answer to my specific needs, or I don't know how to word my question. I work in a company that manufactures servers. Each server manufactured creates a log file with a unique name. The log file is a txt file with identifying lines like "Serial Number: FMXXXXXXX", "Station: 1", "Start Time: 12:00:00", etc. I am trying to configure Splunk to search all these log files based on serial number (to start with) and eventually create a searchable dashboard where I can look up log files by serial number. I'm obviously new to Splunk and have watched a lot of tutorials, but most tutorials focus on searching one big log file, or several log files. So far, I have set up the Splunk UI and pointed it at a directory containing my log files. Under "data summary" my sources number over 100k and my sourcetypes over 14k. Any help would be appreciated. Kevin
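A minimal sketch of search-time field extractions for log lines like those described; the sourcetype name server_mfg_log and the field names are assumptions, not anything standard:

```ini
# props.conf -- hypothetical sourcetype for the manufacturing log files
[server_mfg_log]
EXTRACT-serial  = Serial Number:\s+(?<serial_number>\S+)
EXTRACT-station = Station:\s+(?<station>\d+)
```

Assuming the monitored directory is assigned that sourcetype in inputs.conf, a dashboard search could then be as simple as: sourcetype=server_mfg_log serial_number=FMXXXXXXX, with source showing which file the match came from.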
A couple of weeks ago I took a screenshot of the "Save As Alert" window. Alert type Real-time was available. Today, my colleagues discovered that we can no longer do so, the only alert type is "Sche... See more...
A couple of weeks ago I took a screenshot of the "Save As Alert" window. Alert type Real-time was available. Today, my colleagues discovered that we can no longer do so, the only alert type is "Scheduled". Our admin role has the schedule_rtsearch capability set, but we are not able to find the "rtsearch" capability mentioned in the documentation as also required. Has the latest Splunk Cloud version upgrade to 9.0.2305 removed that capability? Or am I missing something?  
Hi, I created an account to use the free version for 2 weeks and learn Splunk. I received the email to activate my account, but I didn't receive the email with the URL and login credentials. I don't know why; I've been waiting for more than an hour and it hasn't arrived. I thought that Splunk might be rejecting Gmail, so I created another account with Yahoo, but I still haven't received any login credentials or URL. Please help. Thank you.
Hi guys, I need your help, please. I'm trying to make a connection between MongoDB and Splunk using the DB Connect app, but it's not working. I have been working from this guide: https://unityjdbc.com/mongojdbc/setup/mongodb_jdbc_splunk_dbconnect_v3.pdf Are any of you currently working with this connection between MongoDB and Splunk and able to help me, please? Regards.
Apologies if this has been asked before. Essentially, I have a single-value visualization in a dashboard that just displays a number. The dashboard was built in Dashboard Studio, and the search tied to the visualization is simply:

index="my_index" | stats count

What I want to accomplish: when people open this search from the visualization, instead of displaying the | stats count result, it displays a table of the data. If I add a table command at the end, I lose the stats for the visualization. Is there any way to have both a table and stats from one search? Or is there a way to have a different search open when somebody clicks this visualization?
Hello, we have company iPhones, managed with Jamf Now, that we would like to collect logs from. Specifically, we are most interested in login attempts. Is there a way for us to collect those logs and send them to our Splunk Cloud environment? Thank you!
PS: I checked the other similar questions in this forum, but my question is not answered there; hence posting again.

Good afternoon all. When I try to fetch my events using a panel in a dashboard, the results do come up, but there are far fewer than from the exact same query run in the search window. The purpose is to monitor failure spikes happening in the production environment. I'm providing the normal search query as well as the entire dashboard source (scrubbed) for your review. The normal search definitely returns more values than the same query run from the panel.

Normal search query:

index=myOnlyIndex
| fields *
| search clientApp="Fictitious Company Mobile App*" Customer=Production app IN (https) requestMethod=POST
| timechart span=1h count BY failureCode usenull=false useother=false

Dashboard source:

<form version="1.1" theme="light">
  <label>Production Spikes.</label>
  <description>This dashboard contains various panels that will be tested before eventually being moved into a functional dashboard</description>
  <search id="BaseSearch">
    <query>index=myOnlyIndex | fields *</query>
    <earliest>$time_token.earliest$</earliest>
    <latest>$time_token.latest$</latest>
  </search>
  <fieldset submitButton="true" autoRun="false">
    <input type="time" token="time_token" searchWhenChanged="true">
      <label>Timeframe</label>
      <default>
        <earliest>@d</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="environment" searchWhenChanged="true">
      <label>Environment</label>
      <prefix>Customer=</prefix>
      <suffix></suffix>
      <initialValue>*</initialValue>
      <choice value="*">All</choice>
      <choice value="UAT">UAT</choice>
      <choice value="Production">Production</choice>
    </input>
    <input type="dropdown" token="protocol" searchWhenChanged="true">
      <label>Protocol</label>
      <prefix>app IN (</prefix>
      <suffix>)</suffix>
      <choice value="http, https">Both</choice>
      <choice value="https">HTTPS Only</choice>
      <choice value="http">HTTP Only</choice>
    </input>
    <input type="dropdown" token="reqMethod" searchWhenChanged="true">
      <label>Request Method</label>
      <prefix>requestMethod=</prefix>
      <suffix></suffix>
      <initialValue>*</initialValue>
      <choice value="*">All</choice>
      <choice value="GET">GET Method</choice>
      <choice value="POST">POST Method</choice>
      <choice value="PUT">PUT Method</choice>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Mobile Application Generic Spikes</title>
      <chart>
        <search base="BaseSearch">
          <query>search clientApp="Fictitious Company Mobile App*" $environment$ $protocol$ $reqMethod$ | timechart span=1h count BY failureCode usenull=false useother=false</query>
        </search>
      </chart>
    </panel>
  </row>
</form>
Hello All, I have created a filler gauge for a count of events. I would like to either hide the scale on the right side of the gauge or change the major units to show just 100s. Is this possible? (Screenshot of the gauge omitted.) I have searched to see if I can change this, but have not yet found the answer. Thanks as always, eholz1
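If this is a Simple XML dashboard, two gauge options from the chart configuration reference may be worth trying; whether they fully hide the scale on a filler gauge can depend on the Splunk version, so treat this as a sketch:

```xml
<!-- inside the gauge panel's <chart> element -->
<option name="charting.chart.style">minimal</option>
<option name="charting.chart.majorUnit">100</option>
```

The minimal style draws a slimmed-down gauge without the decorated scale, and majorUnit controls the spacing of the major tick labels.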