All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


My indexer is on-prem. I am trying to use Splunk to collect logs from different pods managed in Kubernetes. I tried the Splunk App for Infrastructure, but it seems to collect only metrics from the pods. Is there a way to collect the logs generated by the apps running in a pod? Those logs normally live in directories inside the pod. Any help is appreciated, and thanks in advance.
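One pattern that works when the logs only exist inside the pod's filesystem is a universal forwarder sidecar that shares the application's log volume and monitors it with a normal inputs.conf stanza. This is a minimal sketch; the mount path, index, and sourcetype are illustrative assumptions, not taken from the question:

# inputs.conf on the sidecar UF; /var/log/myapp is the emptyDir volume shared with the app container
[monitor:///var/log/myapp/*.log]
index = k8s_app_logs
sourcetype = myapp:log
disabled = false

Alternatively, if the apps can log to stdout, Splunk Connect for Kubernetes picks container output up from the node (under /var/log/containers) and forwards it to the on-prem indexer over HEC.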
When searching for sourcetype=recorded future IOCS, I receive the following error. Updating the API key fixed the authentication failure, but now I am seeing this. Is there somewhere within the config I need to stop the script from being started via the command line?

No session key was provided by the Splunk server. This can happen if the script is started from the command line which is not supported.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-recorded_future/bin/get-rf-threatlists.py", line 186, in main
    session_key = rf_splunk.api_key.get_session_key()
  File "/opt/splunk/etc/apps/TA-recorded_future/bin/rf_splunk/api_key.py", line 85, in get_session_key
    raise MissingSessionKeyError('No session key was provided by the '
MissingSessionKeyError: No session key was provided by the Splunk server. This can happen if the script is started from the command line which is not supported.
I have a dashboard for viewing activity from suspicious accounts. I currently use a multi-select input that is populated by a report that finds suspicious accounts. By default I want to view activity for all of the suspicious accounts. I can't use * as the "all" value because that would show activity for every user, not just the subgroup of suspicious users. Is there a way to have the dashboard auto-populate with data for all users that show up on the report?
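One way to get the "all suspicious" default without * matching everyone is to keep * as the "All" value but constrain the base search to the report's users with a subsearch, so * can only ever expand within that subgroup. A sketch, assuming the report writes its results to a lookup (the name suspicious_users.csv and the field user are hypothetical):

index=... user=$susp_user$ [ | inputlookup suspicious_users.csv | fields user ]

The subsearch expands to (user="a") OR (user="b") ..., so selecting * (the default) shows exactly the accounts on the report, while picking individual users narrows it further. Ending the scheduled report with | outputlookup suspicious_users.csv keeps the lookup current.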
index=proxy domain=* | rename domain as emotet_domain | where [| inputlookup test | fields emotet_domain] | stats values(emotet_domain) as emotetDomain

Inside the lookup list I want to be able to match, for example, a threat of reason.com OR www.reason.com. I added the match-type option of WILDCARD(emotet_domain), and I have also tried WILDCARD(domain). I am not sure which one will wildcard it, but as of right now it is NOT working.
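Wildcards stored in a lookup only take effect through the lookup command with match_type set in the lookup definition; they are not applied when the lookup is expanded as a subsearch. A sketch, assuming a lookup definition named test whose emotet_domain column holds patterns such as *reason.com:

# transforms.conf
[test]
filename = test.csv
match_type = WILDCARD(emotet_domain)

Search:
index=proxy domain=*
| lookup test emotet_domain AS domain OUTPUT emotet_domain AS matched_domain
| where isnotnull(matched_domain)
| stats values(matched_domain) as emotetDomain

Note that match_type is keyed on the lookup's own field name (emotet_domain), not the event field, which is likely why WILDCARD(domain) had no effect.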
I have the following search. Based on this, I just want to see unique values for the search:

index=one eventtype=one_tu | sort -time, ComputerName | dedup id | stats dc(id) as ID | search open=false | table Date, ComputerName, agentName, class, Content, id
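Note that stats dc(id) collapses the results to a single number, so the later search open=false and table clauses have no fields left to work with. If the goal is one row per unique id, a sketch (field names assumed from the original search):

index=one eventtype=one_tu open=false
| sort -_time, ComputerName
| dedup id
| table Date, ComputerName, agentName, class, Content, id

Here dedup id keeps the most recent event per id (thanks to the sort), and the open=false filter moves before the dedup so it applies to the raw events.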
I'm attempting to extract JSON into multiple events. I've read some other answers and attempted to test configurations using the Add Data feature and tweaking the settings. From what I've read, SHOULD_LINEMERGE should be set to false and there should be a LINE_BREAKER value to identify where to break the lines. Based on the sample data below, I think it should break on the associatedItems field. I've attempted to modify and apply different values in the Add Data wizard, but it doesn't seem to make a difference. Any ideas? This is the source type config I've attempted to use:

[_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = {"associatedItems"
NO_BINARY_CHECK = true
CHARSET = UTF-8
INDEXED_EXTRACTIONS = json
KV_MODE = none
TRUNCATE = 1000000
category = Structured
description = JavaScript Object Notation format. For more information, visit http://json.org/
disabled = false
pulldown_type = true

Sample data:

{
  "limit": 1000,
  "offset": 0,
  "records": [
    {
      "associatedItems": [
        {
          "id": "557058:8bc118d1-552f-4613-9b34-c15add9aad17",
          "name": "removed58",
          "parentId": "10000",
          "parentName": "IDP Directory",
          "typeName": "USER"
        }
      ],
      "category": "user management",
      "changedValues": [
        {
          "changedFrom": "Active",
          "changedTo": "Inactive",
          "fieldName": "Active / Inactive"
        }
      ],
      "created": "2020-03-27T13:27:30.207+0000",
      "eventSource": "",
      "id": 17808,
      "objectItem": {
        "id": "557058:8bc118d1-552f-4613-9b34-c15add9aad17",
        "name": "removed58",
        "parentId": "10000",
        "parentName": "IDP Directory",
        "typeName": "USER"
      },
      "summary": "User updated"
    },
    {
      "associatedItems": [
        {
          "id": "qm:6d90bc78-fd3b-4401-8720-31989454f2b7:5b1d2af1-eb27-4451-8461-575ae3c4a9a5",
          "name": "qm:6d90bc78-fd3b-4401-8720-31989454f2b7:5b1d2af1-eb27-4451-8461-575ae3c4a9a5",
          "parentId": "10000",
          "parentName": "IDP Directory",
          "typeName": "USER"
        }
      ],
      "authorKey": "testuser",
      "category": "user management",
      "changedValues": [
        {
          "changedTo": "Active",
          "fieldName": "Active / Inactive"
        }
      ],
      "created": "2020-03-27T04:55:07.336+0000",
      "eventSource": "",
      "id": 17807,
      "objectItem": {
        "id": "qm:6d90bc78-fd3b-4401-8720-31989454f2b7:5b1d2af1-eb27-4451-8461-575ae3c4a9a5",
        "name": "qm:6d90bc78-fd3b-4401-8720-31989454f2b7:5b1d2af1-eb27-4451-8461-575ae3c4a9a5",
        "parentId": "10000",
        "parentName": "IDP Directory",
        "typeName": "USER"
      },
      "summary": "User created"
    },
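Two things stand out in that config. LINE_BREAKER requires a capturing group: Splunk ends the previous event at the start of the first group and begins the next one at its end, so a pattern with no group never splits anything. And INDEXED_EXTRACTIONS = json is handled by a separate structured-data pipeline that generally does not honor LINE_BREAKER, so the two settings tend to conflict. A sketch that breaks between records, with the caveat that the resulting events are fragments of the original JSON document rather than standalone objects (stanza name is a placeholder):

[json_audit_records]
SHOULD_LINEMERGE = false
# previous event ends after the closing brace; the comma in the capture
# group is discarded; the next event starts at {"associatedItems"
LINE_BREAKER = \}(,\s*)\{\s*"associatedItems"
TIME_PREFIX = "created":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
TRUNCATE = 1000000

If each event must be valid JSON on its own, it is usually easier to split the records array before ingestion, for example with a scripted input or by posting the records individually to HEC.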
Hi. I need help unifying two fields that have the same value but come from separate searches. Here is an example:

Search 1: source=* sourcetype=fortigate user=* action=tunnel-up reason="tunnel established" | table _time user group action remip reason tunnelip

Search 2: source=* sourcetype=fortigate dstport=6443 action=close | table srcip, srcintf

The srcip and remip fields have the same values. I need both searches together, so I can use the user field from the first search in the second search.
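Since both searches hit the same sourcetype, one correlation sketch is to run them as a single search, normalize the two IP fields into one, and roll the attributes up per IP (field semantics assumed from the question):

source=* sourcetype=fortigate ((action=tunnel-up reason="tunnel established") OR (dstport=6443 action=close))
| eval ip=coalesce(remip, srcip)
| stats values(user) as user values(tunnelip) as tunnelip values(srcintf) as srcintf values(action) as action by ip

Each row then carries the user from the tunnel-up event next to the srcintf from the close event for the same IP. If the searches must stay separate, a join on the renamed IP field works too, but stats is usually cheaper.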
Hello, I've been using Splunk for less than a year and I'm trying to work out how to size a Splunk deployment (hardware requirements). I've read the Splunk Capacity Planning manual and the admin guides, but I would like to hear from people who have done it.

1800 clients in the environment
120 GB/day (Splunk)

How many indexers and forwarders should I have for this project, given 1800 clients sending 120 GB a day? Can one server do the job? Also, I am not sure whether I should use a universal or a heavy forwarder, but it seems like universal is the right one. I'd appreciate any recommendations. Thanks in advance, George
I would like to display a timechart report of unique users with successful logins. My query shows unique values only for today when I use | dedup email, and the wrong numbers for previous days.

index=myindex "result"=SUCCESS | timechart span=day count | convert timeformat="%m/%e"

Any thoughts? Regards, Srini
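dedup email deduplicates across the whole time range, so a user is only counted on the first day they appear; counting distinct users inside each timechart bucket avoids that. A minimal sketch:

index=myindex "result"=SUCCESS
| timechart span=1d dc(email) as unique_users

dc() computes the distinct count per day independently, so a user who logs in on several days is counted once on each of them.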
I have the following raw data and I am trying to break it into individual events, each starting at one timestamp and ending before the next timestamp starts.

2020-03-27 00:00:00.003 [quartzJobExecutor-1] INFO c.c.c.r.c.s.r.i.BusinessAssetsForDataSetRecommenderDelegate - (dataset, asset): Starting model computation2020-03-27 00:00:00.033 [quartzJobExecutor-1] INFO c.c.c.r.c.s.r.i.BusinessAssetsForDataSetRecommenderDelegate - (dataset, asset): # data set ids: 9 / # events: 02020-03-27 00:00:00.050 [quartzJobExecutor-1] INFO c.c.d.c.s.StatisticSchedulerJob - StatisticScheduler saving <21> statistics for statistic 2020-03-27 00:00:00.050 [pool-9-thread-1] INFO c.c.d.core.statistic.StatisticBuffer - Persisting <21> statistics

Thanks in advance
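Since the events run together with no newline between them, the break has to happen at the timestamp itself. LINE_BREAKER discards whatever its first capture group matches, so an empty group placed just before the timestamp splits the stream while keeping the timestamp with the following event. A sketch (the stanza name is a placeholder):

[my_app_logs]
SHOULD_LINEMERGE = false
# empty capture group: nothing is discarded, the boundary falls right before the timestamp
LINE_BREAKER = ()\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 23

One caveat: this splits at every timestamp of that shape, so a timestamp quoted inside a message body would also start a new event.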
Hi All, I am getting TailReader - 0 Red, but the root cause and the last 50 related messages are blank. Do we need to enable anything to get data here? Thanks
As in the title, I was wondering whether it is possible to use the same certificate on heavy forwarders both for access to the web UI and as a server certificate for forwarding.

Looking here: https://docs.splunk.com/Documentation/Splunk/8.0.1/Security/Howtogetthird-partycertificates and here: https://docs.splunk.com/Documentation/Splunk/8.0.2/Security/Getthird-partycertificatesforSplunkWeb

the process for creating the CSRs is a bit different: for the web certificate, the password is removed from the key before creating the CSR. Would that make any difference to the final certificate? Would I be able to use it for both functions? Is there anything I should be aware of?
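For what it's worth, the passphrase only protects the private-key file on disk; it has no effect on the CSR's contents or on the certificate the CA issues from it, so one key/certificate pair can in principle serve both roles. The Splunk Web docs have you strip the passphrase because Splunk Web historically expects to read the key without one. Removing or re-adding the passphrase is a plain key-file operation (file names illustrative):

openssl rsa -in server.key -out server-nopass.key
openssl rsa -aes256 -in server-nopass.key -out server-protected.key

The first command writes an unencrypted copy of the key; the second re-encrypts a copy if a protected key is wanted for the forwarding side.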
I changed my Splunk environment in Azure to new servers, including search peers. Is there a way to ignore older days for the Azure Blob input? I tried using IgnoreOlderThan and it complained. It appears the input has to re-read the entire container. Thanks, ~John
Hello, I'm attempting to track AWS-related password events in Splunk. I am sifting through my index and getting the data I need; however, I am having an issue converting the "age" from Unix epoch notation. I am using the following to determine the age of passwords:

| eval age = _time

My output is as follows:

PasswordLastUsed                               age
018448995162 user 2020-02-14T20:49:08+00:00    1585319203
018448995162 user 2020-02-13T16:59:30+00:00    1585319203

Is there a better way to convert the age output into a more readable format (i.e. days)? Thanks, Kiran
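eval age = _time just copies the event's epoch timestamp, which is why both rows show the same number. To get an age in days, parse PasswordLastUsed into epoch form and subtract. A sketch (the strptime format assumes the ISO-8601 value shown above; %z may need adjusting for the +00:00 offset style on older Splunk versions):

| eval last_used = strptime(PasswordLastUsed, "%Y-%m-%dT%H:%M:%S%z")
| eval age_days = round((now() - last_used) / 86400, 1)

now() is the search time; substitute _time if the age should be relative to the event instead.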
I have Splunk 7 with SSO enabled. Is there any way to extract a certain value from the remoteUser header? For example, from a header with content "CN=cnvalue,O=MYORG,C=US" I want to extract the value of CN, so Splunk will use only "cnvalue" as the remote user. I know this can be done by Apache, but is there any way to do it on Splunk's side?
We have users migrating apps (that were using universal forwarders) to Docker containers. The Splunk logging driver for Docker embeds the logged JSON items inside a 'line' object, as in the sanitized example below; these fields are not nested in 'line' when using a UF. A number of reports/dashboards/alerts won't work with the new logging solution because they're not expecting to reference fields with a 'line.' prefix - for example, line.port instead of just port. The goal is to extract the JSON fields out of 'line' and place them back in _raw so the reports/dashboards work with either implementation.

Example (simplified) event:

{"line":{"_t":"2020-03-27T03:17:25.491296Z","logger":"some.logger","level":"INFO","env":"dev","port":"8000","process_id":51,"thread_id":140005384098624,"hostname":"964619888c0d"},"source":"stdout","tag":"some.instance.tag"}

I'm trying to build a props/transforms solution that extracts the JSON out of 'line' and places those fields back at the _raw event level. Here's what I have so far:

local.meta:
[]
export = system

props.conf:
[docker_line_extract]
REPORT-line = extract_line_object, extract_line_objects

transforms.conf:
[extract_line_object]
REGEX = \{"line":\{(?<lineobj>.*)\},

[extract_line_objects]
REGEX = "(?<_KEY_1>[^="\\]+)":\s?"?(?<_VAL_1>[^="\\]*)
FORMAT = $1::$2
SOURCE_KEY = field:lineobj
DEST_KEY = _raw
REPEAT_MATCH = true

The above succeeds in extracting the JSON field/values out of 'line' - the lineobj field appears in the fields list in Splunk Web, and clicking it reveals the expected content:

"_t":"2020-03-27T03:17:25.491296Z","logger":"some.logger","level":"INFO","env":"dev","port":"8000","process_id":51,"thread_id":140005384098624,"hostname":"964619888c0d"

So that part is working. But I can't get the JSON field/values extracted out of lineobj and placed in the _raw event as desired - I've tried a lot of variations with no luck. Does anyone have some insights or a solution? Thank you.
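One likely reason the second transform never lands in _raw: DEST_KEY rewrites only run at index time (referenced via TRANSFORMS- in props.conf), while REPORT- transforms are search-time field extractions that cannot modify _raw. If rewriting the stored event is acceptable (and losing the outer source/tag wrapper is too), an index-time sketch, applied on the heavy forwarder or indexer that first parses the data:

# props.conf
[docker_line_extract]
TRANSFORMS-unwrap_line = unwrap_line

# transforms.conf
[unwrap_line]
# keep only the object inside "line"; assumes "source" always follows it
REGEX = ^\{"line":(\{.*?\}),"source"
DEST_KEY = _raw
FORMAT = $1

After this, _raw is the inner JSON object, so KV_MODE = json yields port instead of line.port, matching the UF-era events. If keeping the original event intact matters, a search-time alternative is | spath input=lineobj, or FIELDALIAS entries mapping line.port to port for the handful of fields the dashboards use.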
I have data in a CSV with the format below: two columns, Date and count. All I want is a monthly average. | timechart avg(count) span=1mon wouldn't work. Please guide.

2019-05-01 0
2019-05-02 0
2019-05-03 0
2019-05-04 0
2019-05-05 0
2019-05-06 0
2019-05-07 0
2019-05-08 136
2019-05-09 62208
2019-05-10 56432
2019-05-11 618
2019-05-12 5604
2019-05-13 130244
2019-05-14 152660
2019-05-15 137472
2019-05-16 147968
2019-05-17 130330
2019-05-18 1315
2019-05-19 1007
2019-05-20 137305
2019-05-21 165069
2019-05-22 145031
2019-05-23 135697
2019-05-24 139552
2019-05-25 390
2019-05-26 203
2019-05-27 196
2019-05-28 154001
2019-05-29 160133
2019-05-30 160315
2019-05-31 145855
2019-06-01 1540
2019-06-02 1471
2019-06-03 184624
2019-06-04 192080
2019-06-05 199334
2019-06-06 176690
2019-06-07 168967
2019-06-08 490
2019-06-09 1684
2019-06-10 206361
2019-06-11 227472
2019-06-12 212830
2019-06-13 214682
2019-06-14 176739
2019-06-15 338
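timechart needs an epoch _time field, and a Date string loaded from a CSV doesn't provide one, which is the likely reason span=1mon returned nothing. A sketch, assuming the file is available as a lookup named mydata.csv:

| inputlookup mydata.csv
| eval _time = strptime(Date, "%Y-%m-%d")
| timechart span=1mon avg(count) as avg_count

If the CSV is indexed rather than used as a lookup, the eval/timechart part is the same; only the generating command changes.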
Will the CB Response app be compatible with Splunk 8.x anytime soon? Or does anyone have a workaround for the errors I'm seeing about it being unable to parse nav XML: "Unicode strings with encoding declaration are not supported. Please use bytes input or XML fragments without declaration." Any help is appreciated.
Having an issue trying to drop a prefix before the username field in the Palo Alto app. The username has the prefix 'foo\' before the user name. I checked the props.conf file in the app and found the following stanza:

# Set user field
EVAL-user = coalesce(src_user,dest_user,"unknown")

I created a regex that I tested on regex101, which worked perfectly: ,foo\\(?<user>[^,]+),

However, testing that regex in Splunk I get: "The regex '_raw=,foo(?<user>[^,]+),' is invalid. Regex: unmatched closing parenthesis." Any suggestions on how to get rid of the prefix and just keep the user name?
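Rather than fighting the search-bar escaping, the prefix can be stripped in the same EVAL the app already uses, since eval's replace() takes a regex. A sketch (the four backslashes are how one literal backslash survives both string and regex escaping in SPL; worth testing in the search bar before editing any .conf):

| eval user = replace(coalesce(src_user, dest_user, "unknown"), "^[^\\\\]+\\\\", "")

As a local override, the same expression can go in $SPLUNK_HOME/etc/apps/<app>/local/props.conf as EVAL-user for the relevant sourcetype, which leaves the shipped default props.conf untouched.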
This is an extension to my other question at https://answers.splunk.com/answers/812982/summary-of-stats-from-multiple-events-for-each-ide.html?minQuestionBodyLength=80 The input and output that I need are in the screenshot below. I was able to use xyseries with the command below to generate output with Identifier and all the Solution and Applied columns for each status. However, now I want two additional columns for each Identifier:

* StartDateMin - minimum value of StartDate over all events with a specific Identifier
* EndDateMax - maximum value of EndDate over all events with a specific Identifier

index = | | stats count by Identifier, TransactionType, Status | eval TransactionType = TransactionType." (".Status.")" | xyseries Identifier, TransactionType, count | fillnull value=0

How do I embed the new columns StartDateMin and EndDateMax by modifying the query above? One option I can think of is to separately generate Identifier with StartDateMin/EndDateMax, then Identifier with the other columns, and join on Identifier, but that would repeat a lot of conditions twice. Is there an easier way using the query above?
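Since xyseries only keeps its three argument fields, one workaround is to compute the min/max per Identifier first, pack them into the row key so they survive the xyseries, and unpack afterwards. A sketch built on the original query (the pipe character in the packed key is an arbitrary separator assumed not to occur in the data):

<base search as in the original query>
| eventstats min(StartDate) as StartDateMin max(EndDate) as EndDateMax by Identifier
| eval rowkey = Identifier."|".StartDateMin."|".EndDateMax
| stats count by rowkey, TransactionType, Status
| eval TransactionType = TransactionType." (".Status.")"
| xyseries rowkey, TransactionType, count
| fillnull value=0
| eval parts = split(rowkey, "|"), Identifier = mvindex(parts, 0), StartDateMin = mvindex(parts, 1), EndDateMax = mvindex(parts, 2)
| fields - rowkey, parts

This avoids running the base search twice and needs no join.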