All Topics


Hello all, I need to ingest audit logs from Atlassian Jira, Confluence, and Bitbucket. Most of that is pretty straightforward, but I am finding the Jira audit logs (atlassian-jira.log and atlassian-servicedesk.log) seem very random (sometimes fields are missing; occasionally a field will be long, with multiple words and spaces and no field names or brackets). Are there any TAs that can assist with parsing on-prem Jira audit logs (atlassian-jira.log and atlassian-servicedesk.log)? Or is there any Splunk guidance I can give the Jira admin to help make the logging better? So far they have told me they don't want to update their log4j properties files because the changes will just get overridden the next time they upgrade.
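In case it helps anyone sketching this out: a minimal props.conf for line breaking and timestamping only, assuming a made-up stanza name and the default log4j timestamp prefix ("2022-05-08 19:55:05,123") at the start of each event; the field extractions would still need rex or TA work on top of this.

[atlassian:jira:log]
SHOULD_LINEMERGE = false
# break only before a leading timestamp, so wrapped lines stay with their event
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2})
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 25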
Hello, I want to detect foreign IPs as a first step, then search the traffic for connections between those foreign IPs and other local IPs.

| tstats `security_content_summariesonly` values(All_Traffic.src_ip) AS src values(All_Traffic.dest_ip) AS dest values(All_Traffic.dest_ip_Country) AS dest_country values(All_Traffic.src_ip_Country) AS src_country from datamodel=Network_Traffic by _time
| eval attacker=if(src_country="","$src$","$dest$")
| search [ | tstats count from datamodel=Network_Traffic WHERE (All_Traffic.src_ip=attacker OR All_Traffic.dest_ip=attacker) by _time ]
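A note on structure rather than a definitive fix: a subsearch runs before the outer search and cannot see fields like attacker from the outer pipeline. One common pattern is to produce the candidate foreign IPs first and feed each one into a second query with map. A minimal sketch, where "MyCountry" is a placeholder for whatever counts as local and maxsearches would need tuning to the result volume:

| tstats `security_content_summariesonly` count from datamodel=Network_Traffic where All_Traffic.src_ip_Country!="MyCountry" by All_Traffic.src_ip
| rename All_Traffic.src_ip as attacker
| map maxsearches=20 search="| tstats count from datamodel=Network_Traffic where (All_Traffic.src_ip=$attacker$ OR All_Traffic.dest_ip=$attacker$) by All_Traffic.src_ip All_Traffic.dest_ip"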
I have been fighting with a regex in my props.conf (Regex-working-on-search-but-not-props-transforms) and after a lot of testing, I came to the conclusion that my regex is fine, and my props.conf is fine. I had originally failed to see that SOME events were processing fine, and assumed all were bad. So I checked to see if there was a cutoff between the good and bad ones, and found one by looking at a len(_raw) field. It turns out my props.conf seems to stop working somewhere after a 4096 character count. So I made some test data and pulled it into Splunk.

Bash:

#!/bin/bash
x="a"
count=1
while [ $count -le 5000 ]
do
  echo "$x regex"
  ((count+=1))
  x+="a"
done

This just makes a bunch of lines, incrementing with another char each time:

a regex
aa regex
aaa regex
aaaa regex
aaaaa regex
aaaaaa regex
aaaaaaa regex

Then I pull this into Splunk, 1 line per log. Static host field = "x"

Props:

[host::x]
SHOULD_LINEMERGE = false
TRANSFORMS-again = regexExtract

Transforms:

[regexExtract]
REGEX = .*(regex)
DEST_KEY = _raw
FORMAT = $1
WRITE_META = true

My search:

index="delete" sourcetype="output.log"
| eval length=len(_raw)
| search length<4100
| table length
| sort - length

I assume my KV limits are OK, based on this btool snippet:

/opt/splunk/etc/system/default/limits.conf [kv]
/opt/splunk/etc/system/default/limits.conf avg_extractor_time = 500
/opt/splunk/etc/system/default/limits.conf indexed_kv_limit = 200
/opt/splunk/etc/system/default/limits.conf limit = 100
/opt/splunk/etc/system/default/limits.conf max_extractor_time = 1000
/opt/splunk/etc/system/default/limits.conf max_mem_usage_mb = 200
/opt/splunk/etc/system/default/limits.conf maxchars = 10240
/opt/splunk/etc/system/default/limits.conf maxcols = 512

------------------------------------------
Now that we have all the setup: I search my index and see that every event up through 4097 chars long will process the index-time regex. After that, the search-time regex works fine, but the index-time regex is no longer functional.

How do I get it to continue processing beyond ~4100 chars?
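For what it's worth, the ~4096-character cutoff lines up with the default LOOKAHEAD in transforms.conf, which caps how many characters an index-time REGEX scans into an event (the default is 4096). A minimal sketch raising it, assuming 10240 comfortably covers the longest events; WRITE_META is omitted here since this transform rewrites _raw via DEST_KEY rather than writing an indexed field:

[regexExtract]
REGEX = .*(regex)
DEST_KEY = _raw
FORMAT = $1
# default is 4096; raise to scan deeper into long events
LOOKAHEAD = 10240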
Hi,

In the above figure, I save the test results using a save ID and then I get a list of IDs like the one below. But when I click on 'Save new test', it doesn't work. I have the same code on the test site, where it works as usual. A sample of the XML code is provided as well.

<panel depends="$host_token$">
  <title>Save Test</title>
  <!-- When the button is pressed, the token execute_save_test is set up with an outputlookup at the end; this is used later on to push the data back into the lookup in order -->
  <input type="text" token="save_test_name_token" searchWhenChanged="true">
    <label>Test_Name</label>
  </input>
  <input type="text" token="save_test_comments" searchWhenChanged="true">
    <label>Comments</label>
    <default>-</default>
  </input>
  <input type="dropdown" token="save_test_status" searchWhenChanged="true">
    <label>Mark test as gold, bad test or ordinary</label>
    <choice value="ORDINARY">ORDINARY</choice>
    <choice value="GOLD">GOLD</choice>
    <choice value="BAD TEST">BAD TEST</choice>
    <default>ORDINARY</default>
  </input>
  <html>
    <button class="btn" data-token-json="{&quot;execute_save_test&quot;:&quot;| eventstats max(ID) as max_ID | eval ID = if(isnull(ID),max_ID + 1,ID) | fields - max_ID | dedup ID | outputlookup Saved_Tests.csv&quot;}">Save new test</button>
  </html>
  <html depends="$saved_test_id$">
    <button class="btn" data-token-json="{&quot;execute_save_test&quot;:&quot;| eval ID = if(isnull(ID),$saved_test_id$,ID) | dedup ID | outputlookup Saved_Tests.csv&quot;}">Update current test (ID: $saved_test_id$)</button>
  </html>
</panel>

Could someone please advise what has gone wrong?

Regards,
Pravin
Hello all, I have a set of data as below; each column holds the value of one id according to the time.

_time     id = 12345  id = 12347  id = 12349
01-févr   10          20          5
02-févr   12          45          9
03-févr   15          53          12
04-févr   17
05-févr
06-févr               120
07-févr               140         56
08-févr   57          150         60
09-févr   60          155         75
10-févr   70          175         90

I would like to calculate the delta and then fill the null deltas. I have this piece of code; so far I can calculate the delta for each id, and I am looking for a solution for filling the null deltas:

index="index" [|inputlookup test.csv
| search id=1234**
| timechart latest(value) as valLast span=1d by id
| untable _time id valLast
| streamstats current=false window=1 global=false first(valLast) as p_valLast by id
| eval delta=valLast-p_valLast
| xyseries _time id delta
| streamstats count(eval(if(isnull(delta),0,null()))) as count by id

Result: columns display the delta values according to each id over time.

_time     id = 1  id = 2  id = 3
01-févr
02-févr   2       25      4
03-févr   3       8       3
04-févr   2
05-févr
06-févr           120
07-févr           20      56
08-févr   57      10      4
09-févr   3       5       15
10-févr   10      20      15

Thanks in advance!
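Not a definitive answer, but one minimal sketch: if a missing delta should simply read as zero, fillnull after the xyseries fills every empty cell; filldown instead would carry the last non-null value forward down each column. Assuming the column layout produced by xyseries above:

| xyseries _time id delta
| fillnull value=0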
Hi there,

This is in regards to this Splunk add-on:

"Splunk Add-on for Microsoft Cloud Services" https://splunkbase.splunk.com/app/3110/#/overview

I would like to find out what API permissions are required in Azure Active Directory for this add-on. I have been searching for it but couldn't find anything except "Azure Insight", and the link provided does not help either. In addition, "Azure Insight" isn't among the API permissions in Azure Active Directory.
Hi Team, what is the volume of events received, in bytes per day, from CyberArk EPM SaaS to Splunk?
Hi,

I have a very basic timechart from the below search; it just counts the number of events with event_type=40 (event ID). The issue is that we had a logging problem and received no events for a specific time period before we resolved it. This means the timechart has a drop to zero and then back up to usual levels. Can I remove this from the timechart somehow?

index=main event_type=40
| timechart count(src_ip) by sensor
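A minimal sketch of one approach: convert the zero buckets to null so most chart renderers show a gap instead of a dip. This hides every zero bucket, not just the outage window, so it assumes a zero count is never meaningful here:

index=main event_type=40
| timechart count(src_ip) by sensor
| foreach * [ eval <<FIELD>> = if('<<FIELD>>'=0, null(), '<<FIELD>>') ]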
Hi All,

I'm currently trying to configure an alert to trigger when 2 events are NOT both present in the last 15 min. In short, if we have only Event1 but not Event2, then an alert should be triggered; if both events are present in the last 15 min, then no alert should be triggered.

Use case: the alert is being configured to alert us when a VPN tunnel interface goes down and stays down for more than 15 min. Generally these VPN connections terminate briefly but come back up after a few seconds, hence we would like to alert only if Event1 (down) took place in the last 15 min without Event2 (up) taking place.

Event1 - Search query
index=firewall 10.10.10.10 Firewall_Name_XYZ=TEST123 AND "Lost Service"

Event2 - Search query
index=firewall 10.10.10.10 Firewall_Name_XYZ=TEST123 AND (inbound "LAN-to-LAN" "created")

Search query to show both events
index=firewall 10.10.10.10 Firewall_Name_XYZ=TEST123 AND ("Lost Service" OR (inbound "LAN-to-LAN" "created"))

Any assistance will be greatly appreciated.
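A minimal sketch of one way to do this, reusing the combined search above and assuming the alert runs over a 15-minute window and triggers when the result count is greater than zero:

index=firewall 10.10.10.10 Firewall_Name_XYZ=TEST123 AND ("Lost Service" OR (inbound "LAN-to-LAN" "created"))
| eval event_kind=if(searchmatch("Lost Service"), "down", "up")
| stats count(eval(event_kind="down")) as down_count, count(eval(event_kind="up")) as up_count
| where down_count>0 AND up_count=0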
Hello Splunk Community! I have a query where I'm extracting data from different logs and displaying them on the same row in a statistical table. Here is my code:

index=main source=/opt/server/*/*userauth*
| rex field=_raw "\]\s\S+\s-\s\[(?<caller>\S+)\]\s\S+->(?<function>\S+)\s(?<logs>.+)\:\s\[(?<value>\S+)\]"
| where isnotnull(caller)
| chart values(value) over caller by logs useother=f

I used the chart command to turn the logs into column headers with their respective values. The result:

caller      logB    logC    logD
caller_id   valueB  valueC  valueD

However, when I narrow down the search by adding "validateSB", which is contained in most log entries,

index=main source=/opt/server/*/*userauth* validateSB
| rex field=_raw "\]\s\S+\s-\s\[(?<caller>\S+)\]\s\S+->(?<function>\S+)\s(?<logs>.+)\:\s\[(?<value>\S+)\]"
| where isnotnull(caller)
| chart values(value) over caller by logs useother=f

column A now appears as a column header with its respective value:

caller      logA    logB    logC    logD
caller_id   valueA  valueB  valueC  valueD

When I search for a specific caller, column A appears with its respective value too. Anybody know why this might be the case? Thanks in advance!
Hello, we're trying to connect the Deep Learning Toolkit to an on-prem Kubernetes cluster, but it looks like it's failing on the initial connection. We're using User Login, so the first thing I need help with is which certificate we're supposed to use. We've tried the one presented by the Kubernetes instance when we run "openssl s_client -connect server.com:6443" and one provided by the Kubernetes admin, but we still get the same error message:

Exception: Could not connect to Kubernetes. HTTPSConnectionPool(host='server.com', port=6443): Max retries exceeded with url: /version/ (Caused by SSLError(SSLError(0, 'unknown error (_ssl.c:4183)')))

Nothing is getting blocked in the firewall.
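One sanity check, not a definitive fix: s_client without -showcerts prints only the server's leaf certificate, while the client usually needs the CA certificate that signed the API server's cert. A sketch for dumping the full chain so the CA cert at the end of the output can be tried instead:

openssl s_client -connect server.com:6443 -showcerts </dev/null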
What is wrong with the match condition in the following input that is not setting accountselectedToken to False? The value is: accountselectedToken = True. It is failing the match condition and performing the second condition, i.e. setting accountToken="*".

<input type="multiselect" token="shardToken" searchWhenChanged="false">
  <label>Shards</label>
  <delimiter>,</delimiter>
  <fieldForLabel>shardaccount</fieldForLabel>
  <fieldForValue>shard</fieldForValue>
  <search>
    <query>| inputlookup ShardList.csv
| eval shardaccount=shard + " - " + account</query>
    <earliest>@d</earliest>
    <latest>now</latest>
  </search>
  <change>
    <condition match="$accountselectedToken$==True">
      <set token="accountselectedToken">False</set>
    </condition>
    <condition>
      <set token="accountToken">"*"</set>
    </condition>
  </change>
</input>
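For comparison, a minimal sketch of the same condition with the string value quoted, on the assumption that an unquoted True in an eval-style match expression is read as a field name rather than the literal string "True":

<condition match="$accountselectedToken$ == &quot;True&quot;">
  <set token="accountselectedToken">False</set>
</condition>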
Hello, I have the below search:

<base search>..
| stats values(Source) as Source count min(_time) as firstTime max(_time) as lastTime by dest, Service_Name, Service_ID, Ticket_Encryption_Type, Ticket_Options
| convert timeformat="%F %H:%M:%S" ctime(values(lastTime))
| convert timeformat="%F %H:%M:%S" ctime(values(firstTime))

I got the above search from: https://docs.splunksecurityessentials.com/content-detail/kerberoasting_spn_request_with_rc4_encryption/

Yet Splunk is not converting the firstTime and lastTime values into human-readable format; it continues to display them in Unix time. Please advise.

Results of search:

Note: I also tried using eval before the stats command, but the same thing happens - firstTime and lastTime are still showing in Unix format.

| eval _time = strftime(_time, "%F %H:%M:%S")
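A minimal sketch of the likely fix: the stats clause renames the aggregates to firstTime and lastTime, so those are the actual field names, and convert should reference them directly rather than wrapping them in values():

| convert timeformat="%F %H:%M:%S" ctime(firstTime) ctime(lastTime)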
There is no time field in my log, so I tried to get the time from the source file name. I tried the settings below.

My files:
/var/log/data_01_20220507
/var/log/data_02_20220506
.
.

transforms.conf:

[get_date]
SOURCE_KEY = MetaData:Source
REGEX = /var/log/data_01_\d+_(?P<date>\d+)\.LOG

[set_time]
INGEST_EVAL = _time = strptime(date,"%Y%m%d") + random() % 1000

props.conf:

[mysourcetype]
DATETIME_CONFIG =
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
disabled = false
TRANSFORMS-time_set = get_date, set_time

However, the events are still timestamped with the current time and the settings do not take effect. The universal forwarder sends data to the indexer, and I put these settings on the indexer. What's the problem?
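One mismatch that stands out, offered as a guess rather than a confirmed diagnosis: the REGEX expects a pattern like data_01_<digits>_<date>.LOG, but the sample paths are /var/log/data_01_20220507 with no .LOG suffix and only one trailing number, so it would never match. Since INGEST_EVAL can reference the source field directly, a single-stanza sketch under that assumption would be:

[set_time]
# pull the trailing yyyymmdd out of source and parse it; no separate REGEX transform needed
INGEST_EVAL = _time = strptime(replace(source, ".*_(\d{8})$", "\1"), "%Y%m%d") + (random() % 1000)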
Hi All, we got our Splunk deployment done by a 3rd party, which completed the deployment and has already left. Suddenly, Sophos Central logs have stopped coming into Splunk for the last 3 months. I have checked the API keys at Sophos; they are still valid. (The logs are integrated through the Sophos API.)

I have the following questions, if somebody can help me with these:
1 - Where do I check, in Splunk, the configuration done to read the Sophos logs? I can't even find where the Splunk-side settings for capturing these logs are done.
2 - How do I troubleshoot this issue?

Thanks.
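A minimal starting point, assuming shell access to the Splunk server and that the integration was installed as an add-on with a modular input: btool shows where every input stanza is defined on disk, and the internal index usually records the input's errors.

$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i sophos

index=_internal sourcetype=splunkd log_level=ERROR sophos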
2022-05-08 19:55:05 [machine-run-433303-hit-7496951-step-5389] [ATMX Logs Request/Extraction/Attach 2.5.2] [Business Process-Fraud Logs Card v2.5.2 (ATMXLogAttach)] [C806968] MachineTask [ERROR] UnsupportedCommandException: unknown command: Cannot call non W3C standard command while in W3C mode

2022-05-08 19:55:03 [machine-run-333503-hit-7496951-step-5389] [ATMX Logs Request/Extraction/Attach 2.5.2] [Business Process-Fraud Logs Card v2.5.2 (ATMXLogAttach)] [C806968] UiRobotCapabilities [ERROR] JavascriptException: javascript error: Unexpected identifier (Session info: chrome=94.0.4606.71)

2022-05-08 19:35:37 [machine-run-43333-hit-7496952-step-5389] [ATMX Logs Request/Extraction/Attach 2.5.2] [Business Process-Fraud Logs Card v2.5.2 (ATMXLogAttach)] [C806966] MachineTask [ERROR] TimeoutException: Expected condition failed: waiting for element to be clickable: [unknown locator] (tried for 60 second(s) with 500 MILLISECONDS interval)

I have the above extract from our logs. I would like to write a regex to get the text in red as "ErrorType".
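A minimal sketch, assuming the red text is the exception class name that follows the [ERROR] marker in each line:

| rex field=_raw "\[ERROR\]\s+(?<ErrorType>\w+Exception)"

This captures UnsupportedCommandException, JavascriptException, and TimeoutException from the samples above; if error types that don't end in "Exception" can appear, capturing everything up to the first colon, as in "\[ERROR\]\s+(?<ErrorType>[^:]+):", would be a looser alternative.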
I am trying to construct an AppArmor profile for my Splunk forwarder agent. I have installed the agent, and it is currently sending logs to my Splunk Enterprise server. But when I try to generate AppArmor profiles using the "aa-genprof" command, I do not see any actions in the output.

How can I generate an AppArmor profile for my Splunk forwarder agent? I could not find any predefined profiles on the internet either.
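For anyone sketching one by hand instead: a deliberately rough starting profile, assuming the default /opt/splunkforwarder install path. This is a hypothetical outline to refine in complain mode, not a vetted profile.

# /etc/apparmor.d/opt.splunkforwarder.bin.splunkd (hypothetical path)
/opt/splunkforwarder/bin/splunkd {
  #include <abstractions/base>
  # the forwarder reads logs owned by other users
  capability dac_read_search,
  # outbound connections to the indexer
  network inet stream,
  network inet6 stream,
  /opt/splunkforwarder/** rwk,
  /var/log/** r,
}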
Is Splunk 8.2.5 supported on Red Hat 7.9?
Hi, I have created a dashboard which shows the latest time of syncing data between the two systems. Now I would like to get the date color changed, for example: green (if the difference between the sync time and the current time is less than 1 hour) and red (if the difference is more than 2 hours). Is anything like that possible?

In the Format visualization options, as far as I can see, I can change the color only for a single value and not for the datetime format. Can anyone please assist? Below is how my date format looks.
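Not a definitive method, but one common workaround: compute the age in the search and color on that instead of on the datetime string. A sketch, assuming a hypothetical sync_time field holding the epoch of the last sync; the case() result would then be mapped to colors in the visualization's color formatting:

| eval age_hours = (now() - sync_time) / 3600
| eval status = case(age_hours < 1, "green", age_hours > 2, "red", true(), "amber")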
We have Splunk set up in our firm, and our application logs write TLS connection information that spans multiple lines; Splunk treats every line as a separate message. Example of the log:

2022-05-07 20:06:24.712 SSL accepted cipher=ECDHE-RSA-AES256-GCM-SHA384
2022-05-07 20:06:24.712 Connection protocol=TLSv1.2
2022-05-07 20:06:24.716 Dump of user cache:
2022-05-07 20:06:24.716 LDAP Cache: User 'user1' is a member of group(s):
2022-05-07 20:06:24.717 'xxxx-tibems-aaaa-prod-rdr'
2022-05-07 20:06:24.717 LDAP Cache: User 'auser2' is a member of group(s):
2022-05-07 20:06:24.717 'xxxx-tibems-yyyy-prod-wtr'
2022-05-07 20:06:24.717 LDAP Cache: User 'ad_cibgvaprod_rdr' is a member of group(s):
2022-05-07 20:06:24.717 'xxxx-tibems-yyyy-prod-rdr'
2022-05-07 20:06:24.717 LDAP Cache: User 'ad_vcsmonprod_adm' is a member of group(s):
2022-05-07 20:06:24.717 'xxxx-tibems-bbbb-prod'
2022-05-07 20:06:24.717 'xxxx-tibems-aaaa-prod-shutdown'
2022-05-07 20:06:24.717 [user1@server1.svr.us.example.net]: Connected, connection id=21879, client id=<none>, type: queue, UTC offset=2

Here a block starts with "SSL accepted cipher=" and ends with "[user1@server1.svr.us.example.net]: Connected,".

I would like to timechart the cipher (ECDHE-RSA-AES256-GCM-SHA384), user (user1), and server (server1.svr.us.example.net), with stats like the following:

Date      Hour   Cipher                       User   Server   Count
10-10-20  10:00  ECDHE-RSA-AES256-GCM-SHA384  user1  server1  200

Please let me know if there is an elegant solution to this.

Kannan
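A minimal search-time sketch using transaction to stitch each block back together, assuming the events really do arrive one line at a time with these start and end markers (index and sourcetype are placeholders; transaction can be expensive, and re-ingesting with a multi-line LINE_BREAKER would be the more robust long-term fix):

index=your_index sourcetype=your_sourcetype
| transaction startswith="SSL accepted cipher=" endswith="]: Connected,"
| rex "cipher=(?<cipher>\S+)"
| rex "\[(?<user>[^@\]]+)@(?<server>[^\]]+)\]: Connected,"
| bin _time span=1h
| stats count by _time, cipher, user, server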