All Topics



Hi folks,

[Current scenario] When a role is created with capabilities, I receive one event for the role creation, and each added capability generates its own event. For example, one role with five capabilities will produce six events in total, all sharing the same 'ID'.

Event for role created:
2023-04-20T16:08:05,290 INFO [ID] 1234567:user - Added IdentityType=Role Name=<Role Name>, ObjId=<Object Id>.

Events for capability added:
2023-04-20T16:12:07,020 INFO [ID] 1234567:user - Access Control change on ObjectType=<Object type>, Name=<Capability>, ObjId=<Object Id>.
2023-04-20T16:12:07,020 INFO [ID] 1234567:user - Access Control change on ObjectType=<Object type>, Name=<Capability>, ObjId=<Object Id>.
2023-04-20T16:12:07,020 INFO [ID] 1234567:user - Access Control change on ObjectType=<Object type>, Name=<Capability>, ObjId=<Object Id>.
2023-04-20T16:12:07,021 INFO [ID] 1234567:user - Access Control change on ObjectType=<Object type>, Name=<Capability>, ObjId=<Object Id>.
2023-04-20T16:12:07,021 INFO [ID] 1234567:user - Access Control change on ObjectType=<Object type>, Name=<Capability>, ObjId=<Object Id>.

My SPL:
index=test
| eval Info=case(Type="Role" AND Action="Added", 'User'." "."has created the role named ".'Name'." with the following capabilities: ".'Capabilities')

In the above, I need the values of the five capabilities in the field 'Capabilities'.

[Requirement] Any idea how to include all the capabilities, grouped on ID, in a field called 'Capabilities'?

Note: I don't want to use 'stats values()' directly in my main search.
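One possible approach, as a sketch rather than a definitive answer: use eventstats to roll the capability names up by ID, so the main search itself never calls stats values(). This assumes the capability name is already extracted into the Name field and the shared identifier into an ID field, as the sample events suggest:

index=test
| eventstats values(eval(if(match(_raw, "Access Control change"), Name, null()))) as Capabilities by ID
| eval Capabilities=mvjoin(Capabilities, ", ")
| eval Info=case(Type="Role" AND Action="Added", 'User'." has created the role named ".'Name'." with the following capabilities: ".'Capabilities')

eventstats adds the rolled-up Capabilities field to every event that shares the same ID, so the role-creation event can reference it in the eval.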
I am using Splunk Dashboard Studio, where several single-value (number) icons are loaded. When I click the main icon, it loads data into the downstream icons using tokens. How can I make a downstream icon load only after I have clicked the base icon?
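A minimal sketch of the usual pattern, assuming a hypothetical token name base_clicked and a hypothetical field called count: the base icon sets the token through a drilldown event handler, and the downstream data source references the token, so it is not dispatched until the token has a value.

Base icon visualization (Dashboard Studio JSON):
{
  "type": "splunk.singlevalue",
  "dataSources": { "primary": "ds_base" },
  "eventHandlers": [
    {
      "type": "drilldown.setToken",
      "options": {
        "tokens": [
          { "token": "base_clicked", "key": "row.count.value" }
        ]
      }
    }
  ]
}

Downstream data source query (the index and field here are placeholders; it only runs once $base_clicked$ is set):
index=main value=$base_clicked$ | stats count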
Hello Splunkers, I'm facing an issue with my indexer: it crashes every 1-2 hours, and sometimes it suddenly crashes 10 minutes after restarting.

Indexer specs:
CentOS Linux 7, 24 CPU, RAM, 1T SSD
Splunk Version: Splunk 8.2.1 (build ddff1c41e5cf)

Crash logs:

Received fatal signal 6 (Aborted) on PID 19932.
Cause: Signal sent by PID 19932 running under UID 1000.
Crashing thread: tailreader0
Registers:
RIP: [0x00002B87E7282277] gsignal + 55 (libc.so.6 + 0x36277)
RDI: [0x0000000000004DDC] RSI: [0x0000000000004EF5] RBP: [0x00002B87E73D6580] RSP: [0x00002B880E3FF608]
RAX: [0x0000000000000000] RBX: [0x00002B87E5F9C000] RCX: [0xFFFFFFFFFFFFFFFF] RDX: [0x0000000000000006]
R8: [0x0000000000000090] R9: [0x00002B87E7800080] R10: [0x0000000000000008] R11: [0x0000000000000202]
R12: [0x000055EFC1BE60C8] R13: [0x000055EFC1BE6098] R14: [0x00002B87E5EAD8C8] R15: [0x00002B880E88E930]
EFL: [0x0000000000000202] TRAPNO: [0x0000000000000000] ERR: [0x0000000000000000] CSGSFS: [0x0000000000000033]
OLDMASK: [0x0000000000000000]
OS: Linux
Arch: x86-64
Backtrace (PIC build):
[0x00002B87E7282277] gsignal + 55 (libc.so.6 + 0x36277)
[0x00002B87E7283968] abort + 328 (libc.so.6 + 0x37968)
[0x00002B87E727B096] ? (libc.so.6 + 0x2F096)
[0x00002B87E727B142] ? (libc.so.6 + 0x2F142)
[0x000055EFBF459410] ? (splunkd + 0x131A410)
[0x000055EFBFB3B102] _ZN3WTF23quickCheckForRolledFileERK8Pathname + 210 (splunkd + 0x19FC102)
[0x000055EFBFB3B947] _ZN3WTF13loadFishStateEP11PipelineSetb + 855 (splunkd + 0x19FC947)
[0x000055EFBFB300E8] _ZN10TailReader8readFileER15WatchedTailFile + 200 (splunkd + 0x19F10E8)
[0x000055EFBFB303A0] _ZN10TailReader4readEP15WatchedTailFileP11TailWatcher + 208 (splunkd + 0x19F13A0)
[0x000055EFBFB30D32] _ZN10TailReader10handleFileEP15WatchedTailFileP11TailWatcher + 514 (splunkd + 0x19F1D32)
[0x000055EFBF91F57A] _ZN12ReaderThread4mainEv + 746 (splunkd + 0x17E057A)
[0x000055EFC07F4C47] _ZN6Thread8callMainEPv + 135 (splunkd + 0x26B5C47)
[0x00002B87E7037E25] ? (libpthread.so.0 + 0x7E25)
[0x00002B87E734ABAD] clone + 109 (libc.so.6 + 0xFEBAD)
Linux / SRV-HO-SPLUNKIDX / 3.10.0-862.11.6.el7.x86_64 / #1 SMP Tue Aug 14 21:49:04 UTC 2018 / x86_64
Libc abort message: splunkd: /opt/splunk/src/pipeline/input/WatchedTailFile.cpp:249: void WTF::assertAndDump(bool, const Str&) const: Assertion `0 && "See splunkd.log for crash reason."' failed.
/etc/redhat-release: CentOS Linux release 7.5.1804 (Core) glibc version: 2.17 glibc release: stable Last errno: 2 Threads running: 96 Runtime: 8926.636142s argv: [splunkd -p 8089 start] Regex JIT enabled RE2 regex engine enabled using CLOCK_MONOTONIC Thread: "tailreader0", did_join=0, ready_to_run=Y, main_thread=N, token=47863354623744 MutexByte: MutexByte-waiting={none} ReaderThread: mode=0, queueSize=14, shutdown=N, reconfigure=N, mode=0 Reading File-WatchedTailFile-WatchedFileState: path="/opt/splunk/var/log/introspection/resource_usage.log", flags=0x1 0000EB, alive First 144 bytes of PathnameStat @0x2b880e890828: 00000000 00 fd 00 00 00 00 00 00 2d 96 0e 08 00 00 00 00 |........-.......| 00000010 01 00 00 00 00 00 00 00 80 81 00 00 e8 03 00 00 |................| 00000020 e8 03 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00000030 66 48 00 00 00 00 00 00 00 10 00 00 00 00 00 00 |fH..............| 00000040 28 00 00 00 00 00 00 00 5a 49 52 64 00 00 00 00 |(.......ZIRd....| 00000050 9e 6a 2f 0b 00 00 00 00 5a 49 52 64 00 00 00 00 |.j/.....ZIRd....| 00000060 44 ae 3e 0b 00 00 00 00 5a 49 52 64 00 00 00 00 |D.>.....ZIRd....| 00000070 44 ae 3e 0b 00 00 00 00 00 00 00 00 00 00 00 00 |D.>.............| 00000080 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00000090 FilesystemChangeWatcher: _timeoutActive=N, _throttled=N, _waitingForNotifyCount=18 EMPTY Q: waitingForTimeout=N, noAction=N, stat=Y, immediateStat=Y, readdir=Y, notify=Y USING INOTIFY: wds=6, score(0xFD00)=999, hasScaledTImeouts=Y Timeout: _when = 511211.936945614, _initialInterval = 3.000 file-in: _initialized=Y, _lastCharWasNewline=Y, _lastReadHadNulls=N, _wasCrcConflict=N, _warned=N _nullsWarned=N, _wasTooNew=N, _exists=Y, _noDebug=N _hadExplicitSource=N, _crossedInitCrcLenBoundary=N, _classifiedAtLeastOnce=Y, _fileReplaced=Y, _readPathAfte rRealEOF=Y _onlyNotifiedOnce=N, _isArchive=N, _isCached=343536, _unowned=N, _deleteOnEOF=N _overrideDeleteOnEOF=N, _doNotDeleteChildren=N, _readFromEnd=N, _readIrregardless=N _fileCheckMethod=0, _crcSalt=<null>, _origPath=<null> _bytesRead=25000259, _storingBytesRead=0, _initCrc=0x56aefe7f2a71345b, _seekCrc=0xa8957fe5632ae3b _filenameCrc=0x55d3f47641cff9b5, _fallbackCrc=0x0, _lastEOFTime=1683114330.495657534948, _modTime=1683114330 .495656545355 _eofInterval=3.000, _ignoreThresh=0.000, _initCrcBytes=256, _initCrcForBatch=0x0 _pendingMetadata=<null> _prevFd=331, _pdModels=[1 PD: [PD: flags=0x1540030, [_path] = "/opt/splunk/var/log/introspection/resource_us age.log", [_MetaData:Index] = "_introspection", [MetaData:Source] = "source::/opt/splunk/var/log/introspection/resour ce_usage.log", [MetaData:Host] = "host::SRV-HO-SPLUNKIDX", [MetaData:Sourcetype] = "sourcetype::splunk_resource_usage ", [_hpn] = "_hpn", [_charSet] = "UTF-8", [_conf] = "source::/opt/splunk/var/log/introspection/resource_usage.log|hos t::SRV-HO-SPLUNKIDX|splunk_resource_usage|4982", [_channel] = "4982"]] _rescheduleDelay=1.000, _rescheduleFresh=Y, _name=/opt/splunk/var/log/introspection/resource_usage.log, _sta tusName= _st=[dev=64768, ino=135173677, mode=100600, size=18534, mtime=1683114330, owner=1000, group=1000] _toStringPrefix=state=0x0x2b880e890780, _backoff=0 _stdataInputHeaderProcessing=[] _detectTrailingNulls=N, _detectReadingFromOffSet=Y, _readAndSkipHeader=N, _uniqueId=4982 _rawPath=$SPLUNK_HOME/var/log/introspection   x86 CPUID registers: 0: 00000014 756E6547 6C65746E 49656E69 1: 000406F1 1E010800 FFFA3203 0F8BFBFF 2: 76036301 00F0B5FF 00000000 00C30000 3: 00000000 00000000 
00000000 00000000 4: 00000000 00000000 00000000 00000000 5: 00000000 00000000 00000000 00000000 6: 00000004 00000000 00000000 00000000 7: 00000000 00000000 00000000 00000000 8: 00000000 00000000 00000000 00000000 9: 00000000 00000000 00000000 00000000 A: 07300401 0000007F 00000000 00000000 B: 00000000 00000000 0000009D 0000001E C: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 E: 00000000 00000000 00000000 00000000 F: 00000000 00000000 00000000 00000000 10: 00000000 00000000 00000000 00000000 11: 00000000 00000000 00000000 00000000 12: 00000000 00000000 00000000 00000000 13: 00000000 00000000 00000000 00000000 14: 00000000 00000000 00000000 00000000 80000000: 80000008 00000000 00000000 00000000 80000001: 00000000 00000000 00000121 2C100800 80000002: 65746E49 2952286C 6F655820 2952286E 80000003: 55504320 2D354520 30323632 20347620 80000004: 2E322040 48473031 0000007A 00000000 80000005: 00000000 00000000 00000000 00000000 80000006: 00000000 00000000 01006040 00000000 80000007: 00000000 00000000 00000000 00000100 80000008: 0000302B 00000000 00000000 00000000 terminating... Correlate the crash with splunkd.log :  05-03-2023 14:46:00.547 +0300 ERROR WatchedFile [20213 tailreader0] - About to assert due to: should have gotten back a record from fishbucket: state=0x0x2b880e890780 wtf=0x0x2b880e88e800 off=25000259 initcrc=0x56aefe7f2a71345b scrc=0 xa8957fe5632ae3b fallbackcrc=0x0 last_eof_time=1683114330 reschedule_fresh=Y is_cached=343536 fd_valid=true exists=tr ue last_char_newline=true on_block_boundary=false only_notified_once=false was_replaced=true eof_seconds=3 delay_done key_until_close=false unowned=false always_read=false was_too_new=false name="/opt/splunk/var/log/introspection/resource_usage.log"
Hello, I have an issue with my Splunk Universal Forwarder: it keeps randomly stopping on a Windows server without any explanation. The error found in the logs is "The SplunkForwarder Service service terminated unexpectedly. It has done this 1 time(s)." Does anyone have any idea what might be causing this issue?
We have an issue where all scheduled searches get skipped whenever a rolling restart is in progress. Also, for the past few weeks, we have observed that the cluster master automatically initiates a rolling restart of the indexers about twice a week. It takes about 24 hours to restart all 24 indexers in the cluster, which also impacts our business. Has anyone encountered this situation before?
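For reference, a minimal diagnostic sketch (assuming the default scheduler logging in the _internal index) that shows which searches are skipped, when, and why, so the skips can be correlated with the rolling-restart windows:

index=_internal sourcetype=scheduler status=skipped
| timechart span=15m count by reason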
Currently using TA-ms-teams-alert-action to send message cards to MS Teams. As I understand it, it only sends the results of fields. I wanted to add a text field containing 1-2 lines of description. It can be added as a new field and sent, but it is rendered using "facts" rather than as a plain text section. I tried adding it to the "Message fields list" using some escape characters, but unfortunately that did not work.
Hello, I have a dropdown element in my dashboard with 3 options, as shown below.

{
  "options": {
    "items": [
      { "label": "Both", "value": "*" },
      { "label": "dublin", "value": "*DDUUBBLLIN*" },
      { "label": "singapore", "value": "SSIINNGG" }
    ],
    "defaultValue": "*",
    "token": "datacenter"
  },
  "title": "data-center",
  "type": "input.dropdown"
}

I want to show the label name in the title of the line graph. If I use the token name with dollar signs, like $datacenter$, it displays the selected label's value:
transaction per hour at DDUUBBLLIN datacenter - when the user selects Dublin
However, I want to display the label name instead:
transaction per hour at Dublin datacenter - when the user selects Dublin
transaction per hour at Singapore datacenter - when the user selects Singapore
transaction per hour at both datacenters - when the user selects Both
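One possible workaround, sketched under assumptions rather than as the only way: since a Dashboard Studio dropdown exposes only the selected value as the token, make the value itself the human-readable label and translate it back into the match pattern inside the search. The index name transactions and the field name datacenter_field below are hypothetical:

Dropdown items:
{ "label": "Both", "value": "Both" },
{ "label": "Dublin", "value": "Dublin" },
{ "label": "Singapore", "value": "Singapore" }

Search used by the chart:
index=transactions
| eval dc_pattern=case("$datacenter$"=="Dublin", "DDUUBBLLIN", "$datacenter$"=="Singapore", "SSIINNGG", true(), ".")
| where match(datacenter_field, dc_pattern)
| timechart span=1h count

The chart title can then simply be "transaction per hour at $datacenter$ datacenter", which now shows the label.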
When we "Schedule a pdf delivery" Is it possible to print dashboard result in Mail Body instead of a pdf attachment?
Hello, I have a lookup table with numbers, and the search checks the numbers that match error_code 11.

index="cdrs" "error_code"="11" "Destino"="*"
| lookup DIDREPEP Destino OUTPUT Destino
| table Destino

But it shows some blank results, because those destinations are not in the lookup table. How can I make it show only the destinations that are not in the lookup table? Thanks, greetings.
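A minimal sketch of one way to do this: output the lookup match into a differently named field, then keep only the events where that field is null (meaning Destino was not found in the lookup). The field name Destino_in_lookup is just a placeholder:

index="cdrs" "error_code"="11" "Destino"="*"
| lookup DIDREPEP Destino OUTPUT Destino AS Destino_in_lookup
| where isnull(Destino_in_lookup)
| table Destino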
Hi, I am not getting AWS data in the Splunk App for AWS. The configuration mentioned in the documentation has been done. A splunk_role was also created with the policy below and attached to the Splunk instance. I also tried a new AWS user created with admin access and the same policy attached, but the data is still not showing up.

Policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeReservedInstances",
        "ec2:DescribeSnapshots",
        "ec2:DescribeRegions",
        "ec2:DescribeKeyPairs",
        "ec2:DescribeNetworkAcls",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeVolumes",
        "ec2:DescribeVpcs",
        "ec2:DescribeImages",
        "ec2:DescribeAddresses",
        "lambda:ListFunctions",
        "rds:DescribeDBInstances",
        "cloudfront:ListDistributions",
        "iam:GetUser",
        "iam:ListUsers",
        "iam:GetAccountPasswordPolicy",
        "iam:ListAccessKeys",
        "iam:GetAccessKeyLastUsed",
        "iam:GetPolicyVersion",
        "iam:ListUserPolicies",
        "iam:ListAttachedUserPolicies",
        "elasticloadbalancing:DescribeLoadBalancers",
        "elasticloadbalancing:DescribeInstanceHealth",
        "elasticloadbalancing:DescribeTags",
        "elasticloadbalancing:DescribeTargetGroups",
        "elasticloadbalancing:DescribeTargetHealth",
        "elasticloadbalancing:DescribeListeners",
        "s3:ListAllMyBuckets",
        "s3:GetAccelerateConfiguration",
        "s3:GetBucketCORS",
        "s3:GetLifecycleConfiguration",
        "s3:GetBucketLocation",
        "s3:GetBucketLogging",
        "s3:GetBucketTagging"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
Hi all! We want to install Splunk UFs on about 200 Windows servers; the process should be centralized and secure. 1. We tried installing via a .bat script with the required fields such as LOGON_USERNAME, LOGON_PASSWORD, etc., but therein lies the problem: anyone can see the user credentials. 2. We then tried changing the .msi installer file properties, but the problem is the same: anyone can open the .msi file with an MSI editor and see the credentials. We want to install the Splunk UF centrally, in such a way that no one can see or obtain the credentials. Can you recommend a way to accomplish this?
Suppose I have data as follows:

| makeresults
| eval a = mvappend(a, "\"1\"")
| eval a = mvappend(a, "\"2\"")
| eval a = mvappend(a, "\"3\"")
| eval a = mvappend(a, "\"4\"")
| eval a = mvappend(a, "\"5\"")
| eval b = mvjoin(a, ",")
| table a, b

Using SPL, how can I (or is it even possible to) make arrays out of a and b so that I have fields c=["1","2","3","4","5"] and d=["1","2","3","4","5"], where c is an array of 5 quoted numerical values and d is an array containing a single string, the comma-separated string: "1","2","3","4","5"? I know I can get arrays of values via extraction from JSON, but is there a way to do this without JSON?
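A minimal sketch of one possibility, assuming you are on a Splunk version that has the mv_to_json_array eval function (that availability is an assumption; the second argument controls type inference and may need to be true() or false() depending on whether the embedded quotes should be preserved literally):

| makeresults
| eval a = mvappend("\"1\"", "\"2\"", "\"3\"", "\"4\"", "\"5\"")
| eval b = mvjoin(a, ",")
| eval c = mv_to_json_array(a, false())
| eval d = mv_to_json_array(b, false())
| table a, b, c, d

c is built from the multivalue field a (one array element per value) and d from the single-valued field b (a one-element array holding the joined string).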
I am currently running a query that is quite inefficient, and it fails when run over extended periods. Splunk only lets me run it every 15 minutes, which is the workaround I have for now. Unfortunately, I can't improve the situation because I don't have control over the log creation process; the logs come from a third-party system. Currently, I'm running the query as an alert every 15 minutes, but I'd like to capture these metrics over a longer period, such as 24 hours or more. Is there a built-in mechanism in Splunk that can store and append the results of these queries for future reference? I am aware that I could use the Java SDK to extract and aggregate the metrics outside of Splunk, but that approach is not ideal.
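A minimal sketch of the usual built-in mechanism, summary indexing: let the scheduled 15-minute search write its already-aggregated results into a summary index with collect, then report over 24 hours or more from that summary. The index name my_summary and the field names in the reporting search are placeholders, and the index must be created beforehand:

... your existing 15-minute query ...
| collect index=my_summary

Reporting search over the stored results:
index=my_summary earliest=-24h
| stats sum(count) as total by some_field

Appending to a lookup with outputlookup append=true is another option if writing to an index is not desirable.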
Supposing I have two events which have a JSON field "groups" containing 1 or more lists (objects) of name-value pairs. So there can be 1...N lists of name-value pairs, and 1...N name-value pairs within one list. I asked the following question https://community.splunk.com/t5/All-Apps-and-Add-ons/How-can-I-parse-key-value-pairs-from-JSON/m-p/642190#M79096 to find out how to parse one list of 1...N name-value pairs. Now I am trying to figure out how to parse events where it is possible to have 1...N lists of 1...N name-value pairs.

Example

event 1:
groups = [ {"name1":"id1", "name2":"id2", "name3":"id3", "name4":"id4"}, { "name10":"id10", "name11": "id11"} ]

event 2:
groups = [ { "name20":"id20", "name21": "id21", "name22":"id22", "name23": "id23"}]

How would I extract the data so that I have it in fields names and ids as follows:

event 1: names = ["name1", "name2", "name3", "name4"] ["name10", "name11"]; ids = ["id1", "id2", "id3", "id4"] ["id10", "id11"]
event 2: names = ["name20", "name21", "name22", "name23"]; ids = ["id20", "id21", "id22", "id23"]

You can use this to create the example data:

| makeresults format=json data="[{\"groups\":[{\"name1\":\"id1\",\"name2\":\"id2\",\"name3\":\"id3\",\"name4\":\"id4\"}, {\"name10\":\"id10\",\"name11\":\"id11\"}]}, {\"groups\":[{\"name20\":\"id20\",\"name21\":\"id21\",\"name22\":\"id22\",\"name23\":\"id23\"}]}]"

Thank you so much in advance for any help you can give!
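A minimal sketch of one way to get there, assuming a Splunk version with the JSON eval functions (json_keys, json_array_to_mv, mv_to_json_array, spath as an eval function) and mvmap; those availability assumptions matter, and the intermediate field names introduced here (row, group, keys) are just placeholders:

| makeresults format=json data="[{\"groups\":[{\"name1\":\"id1\",\"name2\":\"id2\",\"name3\":\"id3\",\"name4\":\"id4\"}, {\"name10\":\"id10\",\"name11\":\"id11\"}]}, {\"groups\":[{\"name20\":\"id20\",\"name21\":\"id21\",\"name22\":\"id22\",\"name23\":\"id23\"}]}]"
| streamstats count as row
| spath path=groups{} output=group
| mvexpand group
| eval names=json_keys(group)
| eval keys=json_array_to_mv(names)
| eval ids=mv_to_json_array(mvmap(keys, spath(group, keys)), false())
| stats list(names) as names, list(ids) as ids by row

Each group object becomes one intermediate row; json_keys gives the array of names, mvmap walks those keys to pull the matching values, and mv_to_json_array turns them back into an array string. The final stats collapses the rows back to one result per original event.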
What is the purpose of the stats command, and how is it used effectively?
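For illustration, stats aggregates the events returned by a search into summary rows using functions such as count, sum, avg, and values, grouped by the fields named after "by". A minimal example against Splunk's own internal logs (any index and fields could be substituted):

index=_internal sourcetype=splunkd
| stats count by log_level

This returns one row per log_level value with the number of matching events, instead of the raw events themselves.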
Hi, can someone please help me build a table from the following JSON? My search results are as follows (the docker and kubernetes objects are collapsed):

docker: { ... }
kubernetes: { ... }
log: LOGGER {"name":"some text here","pathname":"/some/path","timestamp":"2023-05-03T20:35:06Z","action":"pageview","payload":{"category":"cloths","country":"US","appEnv":"production"},"uID":"0023493543"}
stream: stdout

From this I would like to draw the table as:

uID | pathname | category | eventName | country
0023493543 | /some/path | cloths | some text here | US

Thanks in advance
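A minimal sketch of one approach (the output field names come from the sample; payload_json is a placeholder): pull the JSON that follows the literal LOGGER text out of the log field with rex, then let spath extract its fields.

... your base search ...
| rex field=log "LOGGER\s+(?<payload_json>\{.*\})"
| spath input=payload_json
| rename name as eventName, payload.category as category, payload.country as country
| table uID pathname category eventName country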
I have a large organization and a single dashboard that handles all enterprise scan data for one of our scan tools. All scan data is assigned to a project name (we have hundreds of project names). How do I create a process where a list mapping Active Directory security group names to project names is fed into Splunk daily?

I have a single dashboard (I cannot create multiple dashboards or indexes for this) with a drop-down box listing the projects. I want my users to be able to access the dashboard but only be able to filter for the projects they are a member of through the Active Directory security groups. My goal is a scenario where the scan team can maintain their own access to the scan data in the dashboard without making Splunk admins do programming every time there's an addition or change to the list. I also want to ensure projects don't see scan results for other projects. We scan thousands of systems, so creating multiple dashboards or indexes for this data is not an option. Thank you so much for your time.
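One possible building block, sketched under assumptions rather than as a full design: keep a daily-updated lookup that maps AD group / Splunk role names to project names (the lookup name ad_group_to_project and its fields role and project are hypothetical), and drive the dashboard's project drop-down from a search that intersects that lookup with the roles of the logged-in user, which Splunk exposes through the current-context REST endpoint:

| rest /services/authentication/current-context splunk_server=local
| fields roles
| mvexpand roles
| lookup ad_group_to_project role AS roles OUTPUT project
| where isnotnull(project)
| stats values(project) as project
| mvexpand project

This assumes the AD security groups are already mapped to Splunk roles (for example via LDAP or SAML group mapping), so the role names can stand in for group membership. The scan team then maintains only the lookup file, not the dashboard.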
I have a list of events that happened over the last couple of weeks, and it will keep growing because the search is run each week. I would like to create a search that shows the week number of the year corresponding to each event. I was thinking of something like this, but it applies to all events regardless of their timestamp:

| eval Current_Week=tonumber(strftime(now(),"%V"))

I understand I need to change the eval command to use the time of the event instead.
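A minimal sketch of that change: base the calculation on each event's own _time instead of now(), so every event gets the week number of when it actually occurred (the field name Event_Week is just a placeholder):

| eval Event_Week=tonumber(strftime(_time, "%V"))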
We are about to change the User Principal Name suffix of all our users from a .org to a .com account. The test users we have switched can still log into Splunk, but they no longer own their previous alerts, dashboards, and reports. We have found a way to fix these manually, one at a time, but we were hoping there is a process to do this in bulk. We are running Splunk Enterprise 9.0.0. Thanks
I have a script that currently executes on all search heads. Is there a way to execute it only on the current captain? I need only one of my search heads to return results.
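A hedged sketch of one way to detect the captain, assuming a search head cluster: compare the captain's name from the SHC captain endpoint with the local server name, and only proceed when they match (the script could make the same REST calls against its local management port). Whether the captain's label matches serverName exactly may depend on your configuration, so treat this as a starting point:

| rest /services/shcluster/captain/info splunk_server=local
| fields label
| appendcols [| rest /services/server/info splunk_server=local | fields serverName]
| eval is_captain=if(label==serverName, "yes", "no")
| where is_captain="yes"

Note that captaincy can move between members, so the check should run every time the script runs rather than being configured statically.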