All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, we're trying to connect the Deep Learning Toolkit to an on-prem Kubernetes cluster, but it looks like it's failing on the initial connection. We're using User Login, so the first thing I need help with is which certificate we're supposed to use. We've tried the one presented by the Kubernetes instance when we run "openssl s_client -connect server.com:6443" and one provided by the Kubernetes admin, but we still get the same error message:

Exception: Could not connect to Kubernetes. HTTPSConnectionPool(host='server.com', port=6443): Max retries exceeded with url: /version/ (Caused by SSLError(SSLError(0, 'unknown error (_ssl.c:4183)')))

Nothing is being blocked by the firewall.
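One thing that may be worth checking, sketched below: the certificate a client usually needs to trust is the cluster CA rather than the serving certificate that s_client prints. The kubeconfig layout and file names here are assumptions.

    # Hypothetical check: extract the cluster CA from a standard kubeconfig
    # (assumes kubectl access and a certificate-authority-data entry)
    kubectl config view --raw \
      -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' \
      | base64 -d > k8s-ca.crt

    # Verify the API server's chain validates against that CA
    openssl s_client -connect server.com:6443 -CAfile k8s-ca.crt </dev/null
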
What is wrong with the match condition in the following input, which is not setting accountselectedToken to False? The value is: accountselectedToken = True. It fails the match condition and performs the second condition instead, i.e. it sets accountToken="*".

<input type="multiselect" token="shardToken" searchWhenChanged="false">
  <label>Shards</label>
  <delimiter>,</delimiter>
  <fieldForLabel>shardaccount</fieldForLabel>
  <fieldForValue>shard</fieldForValue>
  <search>
    <query>| inputlookup ShardList.csv
| eval shardaccount=shard + " - " + account</query>
    <earliest>@d</earliest>
    <latest>now</latest>
  </search>
  <change>
    <condition match="$accountselectedToken$==True">
      <set token="accountselectedToken">False</set>
    </condition>
    <condition>
      <set token="accountToken">"*"</set>
    </condition>
  </change>
</input>
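A likely culprit, offered as a sketch: the match attribute is an eval-style expression, so after token substitution the unquoted True is read as a field name rather than the string "True", and the comparison fails. Quoting both sides (entity-escaped inside the attribute) makes it a string comparison; this assumes the token really holds the text True:

  <change>
    <condition match="&quot;$accountselectedToken$&quot; == &quot;True&quot;">
      <set token="accountselectedToken">False</set>
    </condition>
    <condition>
      <set token="accountToken">"*"</set>
    </condition>
  </change>
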
Hello, I have the search below:

<base search> ...
| stats values(Source) as Source count min(_time) as firstTime max(_time) as lastTime by dest, Service_Name, Service_ID, Ticket_Encryption_Type, Ticket_Options
| convert timeformat="%F %H:%M:%S" ctime(values(lastTime))
| convert timeformat="%F %H:%M:%S" ctime(values(firstTime))

I got the above search from: https://docs.splunksecurityessentials.com/content-detail/kerberoasting_spn_request_with_rc4_encryption/ Yet Splunk is not converting the firstTime and lastTime values into a human-readable format; it continues to display them in Unix time. Please advise.

Note: I also tried using eval before the stats command, but the firstTime and lastTime values still show in Unix format:

| eval _time = strftime(_time, "%F %H:%M:%S")
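A sketch of the likely fix: after stats, the renamed fields are simply firstTime and lastTime, so convert should reference those names directly rather than values(...):

| stats values(Source) as Source count min(_time) as firstTime max(_time) as lastTime by dest, Service_Name, Service_ID, Ticket_Encryption_Type, Ticket_Options
| convert timeformat="%F %H:%M:%S" ctime(firstTime) ctime(lastTime)
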
There is no time field in my log, so I tried to derive the time from the source file name. I tried the settings below.

My files:

/var/log/data_01_20220507
/var/log/data_02_20220506
...

transforms.conf:

[get_date]
SOURCE_KEY = MetaData:Source
REGEX = /var/log/data_01_\d+_(?P<date>\d+)\.LOG

[set_time]
INGEST_EVAL = _time = strptime(date,"%Y%m%d") + random() % 1000

props.conf:

[mysourcetype]
DATETIME_CONFIG =
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
disabled = false
TRANSFORMS-time_set = get_date , set_time

However, the events are still timestamped with the current time and the settings do not take effect. The universal forwarder sends data to the indexer, and I put these settings on the indexer. What's the problem?
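One thing to check, sketched below: the posted REGEX expects a .LOG suffix and an extra _\d+_ segment, so it would never match paths like /var/log/data_01_20220507, leaving no date field for the INGEST_EVAL to use. A single transform that parses the date straight out of source (assuming the file names really end in eight digits, as shown) might look like:

[set_time_from_source]
# pull the trailing YYYYMMDD from the source path and spread events
# across the day with a random sub-1000-second offset
INGEST_EVAL = _time=strptime(replace(source, ".*_(\d{8})$", "\1"), "%Y%m%d") + (random() % 1000)

and in props.conf:

[mysourcetype]
TRANSFORMS-time_set = set_time_from_source
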
Hi all, we got our Splunk deployment done by a third party, which completed the deployment and has already left. Suddenly, Sophos Central logs have stopped coming into Splunk for the last 3 months. I have checked the API keys at Sophos and they are still valid (the logs are integrated through the Sophos API).

I have the following questions, if somebody can help me with them:
1. Where in Splunk do I check the configuration that reads the Sophos logs? I can't even find where the Splunk-side settings to capture these logs are done.
2. How do I troubleshoot this issue?

Thanks.
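Two searches that may help locate the input and any collection errors, sketched here; the *sophos* wildcard is a guess at how the add-on's inputs and scripts are named:

| rest /services/data/inputs/all
| search title=*sophos*
| table splunk_server, title, eai:acl.app, disabled

index=_internal sourcetype=splunkd component=ExecProcessor *sophos* log_level=ERROR
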
2022-05-08 19:55:05 [machine-run-433303-hit-7496951-step-5389] [ATMX Logs Request/Extraction/Attach 2.5.2] [Business Process-Fraud Logs Card v2.5.2 (ATMXLogAttach)] [C806968] MachineTask [ERROR] UnsupportedCommandException: unknown command: Cannot call non W3C standard command while in W3C mode
2022-05-08 19:55:03 [machine-run-333503-hit-7496951-step-5389] [ATMX Logs Request/Extraction/Attach 2.5.2] [Business Process-Fraud Logs Card v2.5.2 (ATMXLogAttach)] [C806968] UiRobotCapabilities [ERROR] JavascriptException: javascript error: Unexpected identifier (Session info: chrome=94.0.4606.71)
2022-05-08 19:35:37 [machine-run-43333-hit-7496952-step-5389] [ATMX Logs Request/Extraction/Attach 2.5.2] [Business Process-Fraud Logs Card v2.5.2 (ATMXLogAttach)] [C806966] MachineTask [ERROR] TimeoutException: Expected condition failed: waiting for element to be clickable: [unknown locator] (tried for 60 second(s) with 500 MILLISECONDS interval)

I have the above extract from our logs. I would like to write a regex to capture the text in red as "ErrorType".
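A sketch, assuming "the text in red" is the exception class name that follows the [ERROR] marker:

| rex field=_raw "\[ERROR\]\s+(?<ErrorType>\w+Exception)"
| stats count by ErrorType
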
I am trying to construct an AppArmor profile for my Splunk forwarder agent. I have installed the agent and it is currently sending logs to my Splunk Enterprise server, but when I try to generate AppArmor profiles using the "aa-genprof" command, I do not see any actions in the output.

How can I generate an AppArmor profile for my Splunk forwarder agent? I could not find any predefined profiles on the internet either.
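A sketch of the usual workflow; the install path below is the default /opt/splunkforwarder and is an assumption. aa-genprof only records actions generated while it is scanning, so the already-running forwarder needs to be restarted or otherwise exercised during the profiling session:

# start profiling the main binary (assumes the default install path)
sudo aa-genprof /opt/splunkforwarder/bin/splunk

# in a second terminal, generate activity while aa-genprof is scanning
sudo /opt/splunkforwarder/bin/splunk restart

# afterwards, refine the profile from the accumulated audit events
sudo aa-logprof
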
Is Splunk 8.2.5 supported on Red Hat 7.9?
Hi, I have created a dashboard which shows the latest time of syncing data between the two systems. Now I would like the date's color to change, for example: green if the difference between the sync time and the current time is less than 1 hour, and red if the difference is more than 2 hours. Is anything like that possible? In the format visualization, as far as I can see, I can only change the color for a single value and not for the datetime format. Can anyone please assist? Below is how my date format looks.
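One pattern that may work, sketched with made-up field names (last_sync_epoch is hypothetical): compute the lag as a number in the search, then color that column with an expression palette in the dashboard XML:

| eval minutes_behind = round((now() - last_sync_epoch) / 60)
| eval last_sync = strftime(last_sync_epoch, "%F %H:%M:%S")

<format type="color" field="minutes_behind">
  <colorPalette type="expression">case(value &lt; 60, "#53A051", value &gt; 120, "#DC4E41", true(), "#F8BE34")</colorPalette>
</format>
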
We have Splunk set up in our firm, and our application logs write TLS connection information that spans multiple lines; Splunk treats every line as a separate message.

Example of the log:

2022-05-07 20:06:24.712 SSL accepted cipher=ECDHE-RSA-AES256-GCM-SHA384
2022-05-07 20:06:24.712 Connection protocol=TLSv1.2
2022-05-07 20:06:24.716 Dump of user cache:
2022-05-07 20:06:24.716 LDAP Cache: User 'user1' is a member of group(s):
2022-05-07 20:06:24.717 'xxxx-tibems-aaaa-prod-rdr'
2022-05-07 20:06:24.717 LDAP Cache: User 'auser2' is a member of group(s):
2022-05-07 20:06:24.717 'xxxx-tibems-yyyy-prod-wtr'
2022-05-07 20:06:24.717 LDAP Cache: User 'ad_cibgvaprod_rdr' is a member of group(s):
2022-05-07 20:06:24.717 'xxxx-tibems-yyyy-prod-rdr'
2022-05-07 20:06:24.717 LDAP Cache: User 'ad_vcsmonprod_adm' is a member of group(s):
2022-05-07 20:06:24.717 'xxxx-tibems-bbbb-prod'
2022-05-07 20:06:24.717 'xxxx-tibems-aaaa-prod-shutdown'
2022-05-07 20:06:24.717 [user1@server1.svr.us.example.net]: Connected, connection id=21879, client id=<none>, type: queue, UTC offset=2

Here the block starts with "SSL accepted cipher=" and ends with "[user1@server1.svr.us.example.net]: Connected,".

I would like to timechart the cipher (ECDHE-RSA-AES256-GCM-SHA384), user (user1), and server (server1.svr.us.example.net), with stats like the following:

Date      Hour   Cipher                        User   Server   Count
10-10-20  10:00  ECDHE-RSA-AES256-GCM-SHA384   user1  server1  200

Please let me know if there is an elegant solution to this. Kannan
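A sketch of one approach: stitch the per-line events back into one logical event with transaction, then extract the fields. The maxspan value, and the assumption that every block is bounded by exactly these two markers, are guesses:

| transaction startswith="SSL accepted cipher=" endswith="]: Connected," maxspan=1m
| rex "cipher=(?<cipher>\S+)"
| rex "\[(?<user>[^@\]]+)@(?<server>[^\]]+)\]: Connected"
| bin _time span=1h
| stats count by _time, cipher, user, server
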
Hello, I have a multiline log file, but each file comes with a header that I want to discard; I only want to use the part of the log that carries the important information. Can someone help me? Here is the original log file:

Audit file /oracle/SIC/AUDIT/SYS_OPERATIONS/ora_1695798.aud
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
With the Partitioning option
JServer Release 9.2.0.8.0 - Production
ORACLE_HOME = /oracle/SIC/920_64
System name: AIX
Node name: duero
Release: 3
Version: 5
Machine: 00CF214F4C00
Instance name: SIC
Redo thread mounted by this instance: 1
Oracle process number: 37
Unix process pid: 1695798, image: oracle@duero (TNS V1-V3)
Sat Mar 19 06:03:53 2022
ACTION : 'CONNECT'
DATABASE USER: '/'
PRIVILEGE : SYSOPER
CLIENT USER: orasic
CLIENT TERMINAL:
STATUS: 0
Sat Mar 19 06:03:53 2022
ACTION : '/* BRARCHIVE */ CREATE PFILE = '/oracle/SIC/920_64/dbs/sap.ora' FROM SPFILE = '/oracle/SIC/920_64/dbs/spfileSIC.ora''
DATABASE USER: '/'
PRIVILEGE : SYSOPER
CLIENT USER: orasic
CLIENT TERMINAL:
STATUS: 0

But I only need these parts of the log:

Sat Mar 19 06:03:53 2022
ACTION : 'CONNECT'
DATABASE USER: '/'
PRIVILEGE : SYSOPER
CLIENT USER: orasic
CLIENT TERMINAL:
STATUS: 0
Sat Mar 19 06:03:53 2022
ACTION : '/* BRARCHIVE */ CREATE PFILE = '/oracle/SIC/920_64/dbs/sap.ora' FROM SPFILE = '/oracle/SIC/920_64/dbs/spfileSIC.ora''
DATABASE USER: '/'
PRIVILEGE : SYSOPER
CLIENT USER: orasic
CLIENT TERMINAL:
STATUS: 0
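A sketch of one index-time approach; the sourcetype name oracle:audit is hypothetical. Breaking events before the weekday-style timestamps turns the entire banner into a single event, which a nullQueue transform can then discard:

props.conf:

[oracle:audit]
SHOULD_LINEMERGE = false
# break before lines like "Sat Mar 19 06:03:53 2022"
LINE_BREAKER = ([\r\n]+)(?=[A-Z][a-z]{2} [A-Z][a-z]{2}\s+\d{1,2} \d{2}:\d{2}:\d{2} \d{4})
TRANSFORMS-drop_header = drop_audit_header

transforms.conf:

[drop_audit_header]
# the header block always begins with the "Audit file ..." banner
REGEX = ^Audit file
DEST_KEY = queue
FORMAT = nullQueue
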
Dear all, I have a Search Head, Deployment Server, Monitoring Console, a Cluster Manager, an Indexer Cluster, and two unclustered indexers. On the Monitoring Console, I get alerts about IOWaits being high on the two unclustered indexers, and this has been happening only since we upgraded to 8.2.5. There is no evidence of any issues other than this alert in Splunk Web, and I want to disable it. I am using the following KB article: https://docs.splunk.com/Documentation/Splunk/8.2.5/Admin/Healthconf

On the Monitoring Console server, I have put the following into the etc\apps\search\local\health.conf file:

[feature:iowait]
alert:sum_top3_cpu_percs__max_last_3m.disabled = 1

However, the alert still appears in Splunk Web on the Monitoring Console server. Why is this? Am I configuring health.conf on the wrong server or in the wrong folder, or what? When I run btool health list from the command line, I see the configuration there, but Splunk is not doing as it is told! If I am doing the wrong thing, can someone point me to documentation that explains what I should be doing? Thanks in advance!
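Two possibilities, offered as a sketch: the distributed health report shown on the Monitoring Console reflects each monitored node's own health.conf, so the setting may need to live on the two unclustered indexers rather than on the MC; and to silence the whole feature instead of a single indicator, the stanza-level flag can be used:

[feature:iowait]
# disables the entire iowait feature of the health report on this node
disabled = 1
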
Hi, how exactly does the cluster command work? I have lots of unstructured data with different keys and values; how does Splunk detect and cluster these lines? What happens behind the scenes? https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Cluster

Any ideas? Thanks
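For reference, a minimal usage sketch (the index name and threshold are arbitrary): cluster groups events by term similarity, t raises or lowers the similarity threshold, and showcount adds the size of each group:

index=main
| cluster showcount=true t=0.8
| table cluster_count, cluster_label, _raw
| sort - cluster_count
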
Hi there, I am trying to enable drilldown on a dashboard view to use a custom search (see the search string snippet below). Although the search with the aforementioned string works fine on its own, it complains when I use it within the drilldown custom search, saying "Unbalanced quotes". Any idea why? Thanks.
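A common cause, sketched here with made-up names: literal double quotes in the drilldown link (or in a token expanded inside it) leave the generated search string with unbalanced quotes. XML-escaping them, or passing the value through the |s token filter so it is quoted and escaped automatically, usually resolves it:

<drilldown>
  <link target="_blank">search?q=index%3Dweb%20status%3D$row.status|s$&amp;earliest=$earliest$&amp;latest=$latest$</link>
</drilldown>
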
I have two slightly different forms of a tab-delimited log. Both are in the same index and have the same sourcetype. One has a leading number and the other does not. How can I extract a single field name that looks at column 10 if there is a leading number and column 9 if not?

Log with a leading number:

1650556427.891  98.53.183.43  0.001  200  1560  GET  https ... DEN50-C1 PVnGZrUUkw0RcRcqs4 ...

Log without a leading number:

98.53.183.43  0.001  200  1560  GET  https ...  LAX50-C4 ht6GZrUdg5tRcRcq34 ...

I can't just look for field 10 because that only works in one type of log and returns the wrong information in the other. I made a regex that picked the field position based on whether there was a leading number or not. The problem is that it does not work because the two subpattern names are the same.

Splunk error: Regex: two named subpatterns have the same name (PCRE2_DUPNAMES not set).

(?(?=^\d+\.\d+\s)^(?:[^\t\n]*\t){10}(?P<fieldName>[^\t]+)|^(?:[^\t\n]*\t){9}(?P<fieldName>[^\t]+))

If I change the second field name it saves, but only the first name shows up as fieldName, and the entry without a leading number is not included in fieldName. Is there a regex that can do this, or some other way without changing the log? I think if I could split the two log types into different sourcetypes I could do it easily, but I don't think I can; the logs come from AWS cloud servers. The same goes for removing the leading number. Thanks for your help.
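One sketch that avoids duplicate subpattern names entirely: make the leading epoch column optional and then count the same number of columns in both cases. This assumes the delimiter really is a tab and the leading number always looks like digits.digits; adjust the {8} repetition if the target column lands off by one relative to the posted regex:

| rex "^(?:\d+\.\d+\t)?(?:[^\t]+\t){8}(?<fieldName>[^\t]+)"

When the epoch is present the optional group consumes it, so the capture lands one column further right, which is the 10-versus-9 behavior described.
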
I've recently onboarded data from Gsuite to Splunk. I'm trying to create a few queries, but I'm having problems due to the JSON format. I'm just trying to create a table with the owner name, file name, time, etc. I've tried using the spath command and JSON formatting, but I can't seem to get the data into a table. Here's an example query:

index="gsuite" sourcetype="gws:reports:drive"
| spath events{}.parameters{}.value.doc_title

but the field isn't created. Here's a sample of the data:

{
  "actor": { "profileId": "Sample Text" },
  "etag": "\"Sample Text\"",
  "events": [{
    "name": "sheets_import_range",
    "parameters": [
      { "boolValue": true, "name": "primary_event" },
      { "name": "billable" },
      { "name": "recipient_doc", "value": "123456789" },
      { "name": "doc_id", "value": "123456789" },
      { "name": "doc_type", "value": "spreadsheet" },
      { "name": "is_encrypted" },
      { "name": "doc_title", "value": "sampletext.xls" },
      { "name": "visibility", "value": "shared_externally" },
      { "name": "actor_is_collaborator_account" },
      { "name": "owner", "value": "johndoe@gmail.com" },
      { "name": "owner_is_shared_drive" },
      { "name": "owner_is_team_drive" }
    ],
    "type": "access"
  }],
  "id": {
    "applicationName": "drive",
    "customerId": "123456789",
    "time": "2022-05-06T20:55:00.285Z",
    "uniqueQualifier": "-123456789"
  },
  "kind": "admin#reports#activity"
}

I would like the data to look like this:

owner              doc_title       doc_type     visibility
johndoe@gmail.com  sampletext.xls  spreadsheet  shared_externally
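A sketch of one way to flatten the name/value parameter pairs into columns; it uses id.uniqueQualifier to key each event, which assumes that path exists as shown in the sample:

index="gsuite" sourcetype="gws:reports:drive"
| spath path=events{}.parameters{} output=param
| spath path=id.uniqueQualifier output=event_id
| mvexpand param
| eval name=spath(param, "name"), value=spath(param, "value")
| search name IN (owner, doc_title, doc_type, visibility)
| xyseries event_id name value
| table owner, doc_title, doc_type, visibility
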
I am trying to create a Splunk alert which -- well, the details would take too long to explain. The issue is that I'm generating a stats list where some of the results have a single value while others have multiple, e.g.

PrimaryField    SecondaryField
resultToKeep    result1
                result2
resultToToss    result1

How do I filter out 'resultToToss' based on the fact that there is only one SecondaryField result for it?
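A minimal sketch, assuming SecondaryField becomes multivalue via stats values(...):

| stats values(SecondaryField) as SecondaryField by PrimaryField
| where mvcount(SecondaryField) > 1
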
Hello Splunk Community, we are getting ready to migrate our indexers to new hardware. We would like to add the new indexers to our current cluster and then remove the indexers on the old hardware from the cluster. The only problem is that we may be putting RHEL 8 on the new indexers, while the old ones have RHEL 7. I know the docs say the indexers must be on the same OS and OS version, but I'm wondering whether we could still mix the two for a short time while we transition from the old hardware to the new. Any insight is appreciated. Thanks!
I have a requirement: after Submit, I need to hide and show a row's panels based on the dropdown selection. When Day is selected, show panel 1; when Hour is selected, show panel 2. The panels contain queries, so I also don't want the hidden panel's queries to execute, by adding the token condition.

<fieldset submitButton="true" autoRun="false">
  <input type="dropdown" token="timespan">
    <label>Time Span</label>
    <choice value="1h">Hour</choice>
    <choice value="1d">Day</choice>
    <initialValue>1d</initialValue>
    <default>1d</default>
  </input>
</fieldset>
<row depends=???>
  -- panel 1
<row depends=???>
  -- panel 2

How do I set and unset the tokens after Submit, and what should the depends condition on the row/panel be?
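A sketch of the usual pattern, with made-up token names show_day/show_hour. Note that <change> fires when the selection changes rather than on the Submit click, so this approximates "after submit"; a row whose depends token is unset is hidden and its searches do not run:

<input type="dropdown" token="timespan" searchWhenChanged="false">
  <label>Time Span</label>
  <choice value="1h">Hour</choice>
  <choice value="1d">Day</choice>
  <default>1d</default>
  <change>
    <condition value="1d">
      <set token="show_day">true</set>
      <unset token="show_hour"></unset>
    </condition>
    <condition value="1h">
      <set token="show_hour">true</set>
      <unset token="show_day"></unset>
    </condition>
  </change>
</input>

<row depends="$show_day$">
  <!-- panel 1 -->
</row>
<row depends="$show_hour$">
  <!-- panel 2 -->
</row>
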
Hi everyone, would anyone know a way to make my Y-axis maximum value change depending on $click.value2$?

For context, below are some screenshots of the dashboard I am working on for my team. What it aims to do is the following:

Display three (3) different reports by setting the following:
- Select a report to view via "Select to View", then click "Submit".
- The default date is used in the "From MM/DD/YYYY" and "To MM/DD/YYYY" text inputs; users edit the dates to adjust the time period.
- Proceed to the panel on the left, where clicking one of the values ($click.value2$) generates the area chart in the right panel.

My current dilemma is that each specific report has its own "Max Y-Axis Value". Is there a way for me to set that without using "Chart Overlay", so it would be easier for our business unit to understand? Preferably using XML or any default Splunk features (version 8.1.3). Thank you.
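A sketch that may work in plain Simple XML: set an extra token from the left panel's drilldown and feed it into the chart's axis option. The report names and maxima in case() are placeholders:

<!-- left panel: capture the clicked value and derive a Y-axis ceiling -->
<drilldown>
  <set token="selected">$click.value2$</set>
  <eval token="y_max">case("$click.value2$"=="Report A", 100, "$click.value2$"=="Report B", 500, 1=1, 1000)</eval>
</drilldown>

<!-- right panel's area chart: apply the ceiling -->
<option name="charting.axisY.maximumNumber">$y_max$</option>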