All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi! I have configured an Ubuntu machine to send its authentication log to my Splunk instance using syslog, but I only see the failed auth events and events containing "pam_unix(cron:session)". I am looking for the successful logins that contain "pam_unix(login:session)", but all I find is "pam_unix(sudo:session)". Do I need to use the Add-on for Unix and Linux? Any advice, tips, or resources you can provide will be highly appreciated. Thank you
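Note that the name in the parentheses is the PAM service, so pam_unix(login:session) is only written for console/tty logins; SSH logins appear as pam_unix(sshd:session), meaning the expected string may simply never occur on this machine. If the forwarding itself is filtering by facility, a minimal rsyslog sketch (the Splunk host and port are placeholders) that forwards everything from the auth facilities is:

# /etc/rsyslog.d/50-splunk.conf -- forward all auth/authpriv messages to Splunk
auth,authpriv.*    @@splunk.example.com:514

The @@ prefix sends over TCP (a single @ would be UDP); restart rsyslog after the change.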
I am trying to create a dependent dropdown based on the first dropdown. There is a list of users who have multiple usernames, each username has a specific ID, and a panel needs to be populated based on the ID.
Users: ABC, DEF, GHI, JKL
Usernames:
ABC: aaa, bbb, ccc
DEF: ddd, eee
GHI: ggg
JKL: jjj, kkk
IDs:
aaa: 1111
bbb: 2222
ccc: 3333
ddd: 4444
eee: 5555
ggg: 6666
jjj: 7777
kkk: 8888
Once a user is selected in the first dropdown, the second dropdown should only show the usernames for that particular user. Any help will be really appreciated.
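One common pattern is to drive both dropdowns from a lookup and filter the second by the first dropdown's token. A minimal sketch, assuming a hypothetical lookup file user_map.csv with columns user, username, and id, and a token $user_tok$ set by the first dropdown; use this as the populating search of the second dropdown, with username as the label and id as the value:

| inputlookup user_map.csv
| where user="$user_tok$"
| dedup username
| fields username, id

The panel search can then reference the second dropdown's token (for example $id_tok$) directly.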
Hi guys, I ran into a strange issue in my on-premise environment. At first it appeared that the link between the API gateway tier and the HTTP API server tier was missing from the flow map, and no BT was shown for the API server tier. Both tiers are Java applications. I tried to peek with live preview, and the agent for the HTTP API server seemed to be working just fine, sending auto-detected transactions back to the controller. But all of them were masked by a strange "Business Transaction: Not found (id 124)". I suspect this is what is preventing the API server tier's BTs from being shown. Can anyone shed some light on this? How do I dismiss such a "ghost" BT? Thanks very much. In case environment information is needed: AppDynamics Controller build 23.4.0-10019, running on RHEL 7.9. Java agent: Server Agent #23.6.0.34839 v23.6.0 GA compatible with 4.4.1.0 r8e3a55588eed933a4f6bb44c4dec9edc8c25073f release/23.6.0
Hi Legends, I need help displaying the start time (when the error occurred) and the end time (when it was resolved) in separate columns. Currently they appear in the same column, like this:

status      Date         Time               REASON_CODE
FAILED      25/04/2023   25/04/2023 20:33   Z910
FAILED      25/04/2023   25/04/2023 20:11   Z910
FAILED      25/04/2023   25/04/2023 3:38    Z911
FAILED      25/04/2023   25/04/2023 3:37    Z911
FAILED      25/04/2023   25/04/2023 3:37    Z911
FAILED      25/04/2023   25/04/2023 3:36    Z911

Please let me know how I can modify my query to display results like this:

Status      Date         Start Time         End Time           REASON_CODE   Count
FAILED      25/04/2023   25/04/2023 20:11   25/04/2023 20:33   Z910          2
FAILED      25/04/2023   25/04/2023 3:36    25/04/2023 3:38    Z911          4

My query:

index=test sourcetype="*" STATUS_REASON_CODE IN (U220, U902, U904, U905, Z704, Z900, Z902, Z903, Z904, Z910, Z911, Z912, Z913, Z914, Z920, Z922, Z923, Z924) STATE=FAILED
| fields STATE _time STATUS_REASON_CODE
| convert timeformat="%Y-%m-%d %H:%M:%S" ctime(_time) AS Time
| convert timeformat="%Y-%m-%d" ctime(_time) AS TimeDay
| eval FailTime=case(field_name="Failure Time", _time)
| eval ReasonCode=case(field_name="Reason Code", STATUS_REASON_CODE)
| eval State=case(field_name="State", STATE)
| eval minTime = (min(Time))
| rename STATUS_REASON_CODE as REASON_CODE
| sort - Time
| table STATE TimeDay minTime REASON_CODE
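A minimal sketch of one way to get there, assuming grouping by day and reason code is sufficient (which matches the expected output above). Note that eval min(Time) in the original only operates within a single event, so the per-group minimum has to come from stats:

index=test sourcetype="*" STATE=FAILED STATUS_REASON_CODE IN (U220, U902, U904, U905, Z704, Z900, Z902, Z903, Z904, Z910, Z911, Z912, Z913, Z914, Z920, Z922, Z923, Z924)
| eval Date=strftime(_time, "%d/%m/%Y")
| stats min(_time) as start max(_time) as end count as Count by STATE Date STATUS_REASON_CODE
| eval "Start Time"=strftime(start, "%d/%m/%Y %H:%M"), "End Time"=strftime(end, "%d/%m/%Y %H:%M")
| rename STATUS_REASON_CODE as REASON_CODE
| table STATE Date "Start Time" "End Time" REASON_CODE Count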
I want to get the value of 'AccessControlRuleName' into a separate field using regex. Sample log: "AccessControlRuleName: PCIWAN_Access_In_#4-no-lookup," What would the regex be to create a new field extraction for the above? I need some help on this.
Hi, I have a time field called LastLogonDate with this format: 6/28/2023 1:47.35 PM. I want to reformat this field into a new field, so I am doing | eval Last=strftime(LastLogonDate, "%d-%m-%y") but it doesn't work. What is wrong, please?
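strftime() expects an epoch (numeric) timestamp, and LastLogonDate here is a string, so it has to be parsed with strptime() first. A minimal sketch, assuming the format really is month/day/year with a dot between minutes and seconds:

| eval Last=strftime(strptime(LastLogonDate, "%m/%d/%Y %I:%M.%S %p"), "%d-%m-%y")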
Hi Experts, I need help displaying data. Currently I am able to display the search data as:

Status      Date         Date & Time        REASON_CODE
FAILED      25/04/2023   25/04/2023 20:33   Z910
FAILED      25/04/2023   25/04/2023 20:11   Z910
FAILED      25/04/2023   25/04/2023 3:38    Z911
FAILED      25/04/2023   25/04/2023 3:37    Z911
FAILED      25/04/2023   25/04/2023 3:37    Z911
FAILED      25/04/2023   25/04/2023 3:36    Z911

using the query below:
============================================
index=test sourcetype="*" STATUS_REASON_CODE IN (U220, U902, U904, U905, Z704, Z900, Z902, Z903, Z904, Z910, Z911, Z912, Z913, Z914, Z920, Z922, Z923, Z924) STATE=FAILED
| fields STATE _time STATUS_REASON_CODE
| convert timeformat="%Y-%m-%d %H:%M:%S" ctime(_time) AS Time
| convert timeformat="%Y-%m-%d" ctime(_time) AS TimeDay
| eval FailTime=case(field_name="Failure Time", _time)
| eval ReasonCode=case(field_name="Reason Code", STATUS_REASON_CODE)
| eval State=case(field_name="State", STATE)
| eval minTime = (min(Time))
| rename STATUS_REASON_CODE as REASON_CODE
| sort - Time
| table STATE TimeDay minTime REASON_CODE
==================================================
I need help displaying the data as:

Status      Date         Start Time         End Time           REASON_CODE   Count
FAILED      25/04/2023   25/04/2023 20:11   25/04/2023 20:33   Z910          2
FAILED      25/04/2023   25/04/2023 3:36    25/04/2023 3:38    Z911          4

Please help me modify my query.
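A sketch of one modification, a variant of the approach shown earlier on this page that bins by calendar day and lets stats produce the start/end boundaries (assuming a per-day grouping is what is wanted):

index=test sourcetype="*" STATE=FAILED STATUS_REASON_CODE IN (U220, U902, U904, U905, Z704, Z900, Z902, Z903, Z904, Z910, Z911, Z912, Z913, Z914, Z920, Z922, Z923, Z924)
| bin span=1d _time as Day
| stats min(_time) as start max(_time) as end count as Count by STATE Day STATUS_REASON_CODE
| eval Date=strftime(Day, "%d/%m/%Y"), "Start Time"=strftime(start, "%d/%m/%Y %H:%M"), "End Time"=strftime(end, "%d/%m/%Y %H:%M")
| rename STATUS_REASON_CODE as REASON_CODE
| table STATE Date "Start Time" "End Time" REASON_CODE Count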
I have the following query:

index="xxxx" source="*$Device_ID$*xxxx*"
| eval Device_ID=mvindex(split(source,"/"),5)
| rex field=_raw "(?<timestamp>[^|]+)"
| table Device_ID timestamp
| streamstats count as s_no by Device_ID
| sort 0 - s_no
| table Device_ID s_no timestamp

How can I use the map or foreach command with the above query so that it runs separately for each Device_ID?
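A minimal map sketch, assuming an outer search that enumerates the device IDs first (map runs the inner search once per outer result row, and maxsearches caps how many runs are allowed; the index and source patterns are carried over from the question):

index="xxxx" source="*xxxx*"
| eval Device_ID=mvindex(split(source,"/"),5)
| dedup Device_ID
| map maxsearches=100 search="search index=\"xxxx\" source=\"*$Device_ID$*xxxx*\"
  | eval Device_ID=\"$Device_ID$\"
  | rex field=_raw \"(?<timestamp>[^|]+)\"
  | streamstats count as s_no
  | table Device_ID s_no timestamp"

Note that streamstats no longer needs by Device_ID inside map, since each inner search already handles a single device.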
Hello. I want to extract the string that comes before the first "|". e.g.:
Math | Math | Science | Math English | Math Science | Science | Science | Science
Expected result:
Math Math English Science
The search below did not work:
my search | stats count by Subject="(?<Subject>[^\|]+)"
Please help me out.
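stats cannot take a regex inline; the extraction has to happen first, for example with rex. A minimal sketch, assuming the text is in a field called Subject:

my search
| rex field=Subject "^(?<FirstSubject>[^\|]+)"
| eval FirstSubject=trim(FirstSubject)
| stats count by FirstSubject

trim() drops the trailing space left before the first "|".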
Hi, does anyone have an idea why, before an update, I was able to run a query and get correct results, but after updating to 8.2.9 I get random results? The data is in the event data; I can find the specific records if I search for one specific user, but if I run the full query I can get 1 result, 15 results, 42 results, and so on, running the same query within the same timeframe. We have over 1500 indexes and it seems to be an issue with only this one specific index. It does seem odd that the data is there if I use a specific user=123 instead of user=*, but then I only get results for user 123. I even tried adding user=123 OR user=*, which does not change anything; still random results. Could it be something that needs to be cleared? Has anyone seen this before?

index=ABC operation=Paymentcompleted PAYMENT_METHOD=* user=* firstName=* lastName=* jurisdiction=UK AMOUNT=* country=GB
| dedup user
| eval NameofPayer = FIRST_NAME." ".LAST_NAME
| eval NameofCust = firstName." ".lastName
| eval NameofCust=upper(NameofCust)
| eval NameofPayer=upper(NameofPayer)
| where NOT match(NameofPayer,NameofCust)
| stats list(NameofPayer) as NameofPayer, list(NameofCust) as NameofCust by user
| fieldformat Time = strftime(Time, "%Y-%m-%d %H:%M:%S")

Using stats list, values, or table does not make a difference to the random results, while this should return over 140 results. Thank you in advance.
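Fluctuating result counts for the same query often point to a search peer intermittently failing to respond, or to bucket issues on one indexer. A quick diagnostic sketch, comparing which indexers the events come from between runs:

index=ABC operation=Paymentcompleted jurisdiction=UK country=GB
| stats count by splunk_server

If the per-server counts change between identical runs, the problem is on the peer/bucket side rather than in the SPL.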
Hi all, I use Splunk Enterprise with the free license at home. This week I upgraded to version 9.1.0.1 and I was happy to see that "search" is now available in dark mode as well; it can be set in the user preferences. But the free license has just one user, and user preferences do not seem to be available for that user. Is there a way to activate dark mode for search with the free license? Bart.
Hi Experts, we have recently installed a heavy forwarder, disabled indexing on it, and are not yet forwarding any data from forwarders, but all the queues on the HF are full. I don't understand how the HF's queues can be full without it receiving any data. Please suggest how to clear them and return it to normal. Regards, Eshwar
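Full queues on an HF usually mean the output side is blocked, which backs up every queue behind it. A sketch to see which queue filled up first, assuming the HF's _internal logs are searchable (the host name is a placeholder):

index=_internal host=my-hf source=*metrics.log* group=queue
| eval pct_full=round(current_size_kb / max_size_kb * 100, 1)
| timechart max(pct_full) by name

If the furthest-downstream queue is pegged at 100% first, check outputs.conf: with indexing disabled, the HF must have a reachable forwarding destination, or everything upstream of the output will block.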
Hi, I have a question concerning license volume usage. If a company ingests data with a UF but also with WinRM or syslog, does the license usage volume cover just the data collected by the UF, or also the data collected via WinRM and syslog? Thanks
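License usage is measured on the volume of data indexed per day, regardless of how it was collected, so UF, WinRM, and syslog inputs all count. To break daily usage down by sourcetype, a sketch against the license manager's internal log (b is the byte count and st the sourcetype in license_usage.log events):

index=_internal source=*license_usage.log* type=Usage
| stats sum(b) as bytes by st
| eval GB=round(bytes/1024/1024/1024, 2)
| sort - GB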
Hello everyone, I have tried multiple times but I am unable to break events before the log level (INFO and WARNING) as in the logs below. Could you please help me break the logs into events starting with the log level?
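Event breaking is configured in props.conf on the first full Splunk instance that parses the data (heavy forwarder or indexer). A minimal sketch, assuming each event should start at INFO or WARNING (the sourcetype name is a placeholder):

[my_app:logs]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=INFO|WARNING)

The lookahead keeps the log level at the start of each new event, while the newline in the capture group is discarded.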
Is there a way to make the SAML group names human-readable, matching the group names as they appear in Azure, instead of the Object IDs? That would make it easier to understand which group has access to which role. Thank you!
Splunk universal forwarder crashes. Here are the crash logs:

[build de405f4a7979] 2023-07-10 17:31:30
Received fatal signal 11 (Segmentation fault) on PID 3013854.
Cause: No memory mapped at address [0x0000000000000080].
Crashing thread: parsing
Registers:
 RIP: [0x00007FBC41EDEA74] __pthread_mutex_lock + 4 (libpthread.so.0 + 0xAA74)
 RDI: [0x0000000000000070]
 RSI: [0x00007FBC3E21A0B0]
 RBP: [0x00007FBC2FDFD980]
 RSP: [0x00007FBC2FDFD8C8]
 RAX: [0x0000558B2F9877E0]
 RBX: [0x0000000000000000]
 RCX: [0x0000000000000000]
 RDX: [0x00007FBC2FDFD8F8]
 R8: [0x0000000000000000]
 R9: [0x00007FBC41200080]
 R10: [0x00000000000000A3]
 R11: [0x0000000000000000]
 R12: [0x0000000000000001]
 R13: [0x0000000000000070]
 R14: [0x00007FBC2FDFD8F0]
 R15: [0x0000558B2F9877D0]
 EFL: [0x0000000000010202]
 TRAPNO: [0x000000000000000E]
 ERR: [0x0000000000000004]
 CSGSFS: [0x002B000000000033]
 OLDMASK: [0x0000000000000000]
OS: Linux
Arch: x86-64
Backtrace (PIC build):
 [0x00007FBC41EDEA74] __pthread_mutex_lock + 4 (libpthread.so.0 + 0xAA74)
 [0x0000558B2CE030D9] _ZN16PthreadMutexImpl4lockEv + 9 (splunkd + 0x2DD20D9)
 [0x0000558B2CD3ED27] _ZN9EventLoop20internal_runInThreadEP13InThreadActorb + 103 (splunkd + 0x2D0DD27)
 [0x0000558B2CB7B19A] _ZN11Distributed11EloopRunner3runEPNS_15EloopRunnerTaskE + 170 (splunkd + 0x2B4A19A)
 [0x0000558B2C02A6A6] _ZN18TcpOutputProcessor7executeER15CowPipelineData + 230 (splunkd + 0x1FF96A6)
 [0x0000558B2C7B1B29] _ZN9Processor12executeMultiER18PipelineDataVectorPS0_ + 73 (splunkd + 0x2780B29)
 [0x0000558B2BDA03A2] _ZN8Pipeline4mainEv + 1074 (splunkd + 0x1D6F3A2)
 [0x0000558B2CE02DAD] _ZN6Thread37_callMainAndDiscardTerminateExceptionEv + 13 (splunkd + 0x2DD1DAD)
 [0x0000558B2CE03CA2] _ZN6Thread8callMainEPv + 178 (splunkd + 0x2DD2CA2)
 [0x00007FBC41EDC1CF] ? (libpthread.so.0 + 0x81CF)
 [0x00007FBC4146ADD3] clone + 67 (libc.so.6 + 0x39DD3)
I think my question is: is the search returning the SRC field the way it does because (A) there is no data, or (B) it is being filled in by the search and the search needs to be changed? This is a tstats search from either InfoSec or Enterprise Security. What should I change, or do I need to do something different?
https://docs.splunk.com/Documentation/AddOns/released/F5BIGIP/Setup

I have two issues:
1. In the Splunk docs, the provided log format for DNS logging is prefixed with "<190>". I believe this number represents the facility (local7) and severity (info). The DNS request/response events do not have log levels associated with them, and I assume this is the reason. I don't know if the syslog servers or Splunk are doing something wrong.
2. The "answer" field in the DNS response events is a quoted string that looks like this: "test1.f5lab.dhs.gov*. 5 IN A someIpAddress". But when displayed in Splunk, something has replaced the tabs with some kind of ASCII escape string. Splunk shows the answer field/value pair as: "test1.f5lab.dhs.gov. #0155#011IN#011A#someIpAddress". I'm unsure if this is happening on the syslog server or the Splunk side.
Hello, on the Splunk Search & Reporting page I run:

| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where `process_certutil` (Processes.process=*urlcache* Processes.process=*split*) OR Processes.process=*urlcache* by Processes.dest Processes.user Processes.parent_process Processes.process_name Processes.process Processes.process_id Processes.original_file_name Processes.parent_process_id
| `drop_dm_object_name(Processes)`
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| `certutil_download_with_urlcache_and_split_arguments_filter`

and get: "Error in 'SearchParser': The search specifies a macro 'drop_dm_object_name' that cannot be found. Reasons include: the macro name is misspelled, you do not have "read" permission for the macro, or the macro has not been shared with this application. Click Settings, Advanced search, Search Macros to view macro information."

I created the 'drop_dm_object_name' macro and I keep getting this error even though I have given read permission. Although I also created the "security_content_ctime" and "certutil_download_with_urlcache_and_split_arguments_filter" macros, I get the same error for those macros too. Can you please help me solve this problem?
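These macros ship with security content apps (ESCU / Splunk Security Essentials) rather than core Splunk, and they take arguments, so the macro name must carry the argument count: a macro created as plain drop_dm_object_name will not be found by a search calling drop_dm_object_name(Processes). A macros.conf sketch based on the commonly published definition (treat the exact body as an assumption and verify against your content app):

[drop_dm_object_name(1)]
args = obj
definition = rename "$obj$.*" as "*"

The macro also has to be shared globally (or at least with the app the search runs in) and be readable by your role.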
Hi! I want to integrate OpenCTI intel feeds into Splunk, but I cannot find any add-on for this integration. OpenCTI provides a connector for this connection, but what configuration do I need to provide in Splunk to receive the feeds? Can you please suggest a specific guide for how to do this with OpenCTI? Thank you
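If the OpenCTI connector pushes events over HTTP (many Splunk integrations do, but verify this in the connector's own documentation), the standard receiving mechanism on the Splunk side is the HTTP Event Collector. A sketch of the receiving configuration, with every value a placeholder to adapt:

inputs.conf on the receiving instance:

[http]
disabled = 0

[http://opencti]
disabled = 0
token = <generated-token-guid>
index = threat_intel
sourcetype = opencti:indicator

The token and the collector URL (https://<splunk-host>:8088/services/collector) then go into the connector's configuration.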