All Topics

After we upgraded from 7.3 to 8.1.4, the UF can no longer read the JSON data:

    07-21-2021 16:03:02.643 +0200 ERROR JsonLineBreaker - JSON StreamId:427620843244980635 had parsing error:Unexpected character while expecting ':': 'S' - data_source="/opt/uc4/srvavq/prod/share/ae/temp/toSplunk/OPT_REP/normal/ARIZON_CS_UC4_ActiveSchedJobs/P_AVQ_Aktive_Automic_Jobs_AVQPROD_0000000000000022.txt", data_host="xh515", data_sourcetype="P_AVQ_Aktive_Automic_Jobs_AVQPROD-too_small"

props.conf:

    [source::/opt/uc4/srvavq/*/share/ae/temp/toSplunk/ARIZON_CS/*/*/*.txt]
    INDEXED_EXTRACTIONS = json
    TIMESTAMP_FIELDS = Timestamp

The data file looks like this:

    {"Timestamp" : 1626874253, "Anzahl" : "7", "userstamp" : "RM", "status" : "Processed", "taskname" : "task_secevt2_apply_sinstr.rm","env":"AVQPROD"}
    {"Timestamp" : 1626874221, "Anzahl" : "1", "userstamp" : "IMED_65454_54222", "status" : "Processed", "taskname" : "rm.set_resp","env":"AVQPROD"}
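One way to check which props.conf stanza the forwarder actually applies to that file is btool; this is a generic sketch (the grep pattern is only an illustration). It is also worth noting that the data_source path in the error (.../toSplunk/OPT_REP/normal/...) does not match the .../toSplunk/ARIZON_CS/... source pattern in the stanza above.

    # Run on the UF; assumes a default $SPLUNK_HOME install layout.
    $SPLUNK_HOME/bin/splunk btool props list --debug | grep -A5 toSplunk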
I'm having a bit of an issue with my current logic. Ideally my lookup would contain three months of data; however, when this search executes I only receive the previous 15 minutes of data. I presumed that the 'earliest' specification would apply only to the base search and not constrain the inputlookup; I was incorrect. In an ideal setting, the base search runs every 15 minutes and pulls in recent events, the lookup is appended to the results, stats recalculates the latest events per public_ip, anything older than 3 months is discarded, and the lookup is updated. Can anyone advise on my time settings so that the appended lookup results are not restricted to the 15-minute window of the base search?

    index=firewall earliest=-15m
    | fields user src_host private_ip public_ip
    | inputlookup user_tracking.csv append=true
    | stats latest(_time) as latestTime by user src_host public_ip private_ip
    | where latestTime>relative_time(now(),"-3mon")
    | outputlookup user_tracking.csv
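A minimal sketch of one possible adjustment, offered as an assumption about the cause: rows read back from the CSV carry a latestTime column but no _time field, so latest(_time) ignores them and the where clause then drops them. Normalizing both sides onto latestTime before the stats keeps the historical rows:

    index=firewall earliest=-15m
    | fields user src_host private_ip public_ip
    | eval latestTime=_time
    | inputlookup user_tracking.csv append=true
    | stats max(latestTime) as latestTime by user src_host public_ip private_ip
    | where latestTime>relative_time(now(),"-3mon")
    | outputlookup user_tracking.csv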
We have a Splunk Alert set up with the following configuration:

SETTINGS
- Alert type = Scheduled (Run on Cron Schedule)
- Time Range = Today
- Cron Expression = * * * * *
- Expires = 24 hours

TRIGGER CONDITIONS
- Trigger alert when = Number of Results > 0
- Trigger = Once
- Throttle = Ticked
- Suppress triggering for = 1 day

TRIGGER ACTIONS
- When triggered: Add to Triggered Alerts; Send email

The issue we are experiencing is that if 3 events occur at different times throughout the day, we only receive an email for the first one. Also, the following day (within the 24-hour period from the previous alert) we receive no email notifications. In all cases, if I select the Splunk Alert and view the results, I see all the events, including those for which no email notification was received. I believe the issue has to do with the following settings:

- Trigger = Once
- Throttle = Ticked
- Suppress triggering for = 1 day

From the Splunk documentation it is not clear whether all Splunk alerts get suppressed after the first one, or just repeated alerts for the same event. I am assuming it's the former, as this would explain why we see no further email notifications until the 1 day / 24 hour period expires(?). I think changing the settings to the following:

- Trigger = For each result
- Throttle = Ticked
- Suppress triggering for = 1 day

will at least mean that we receive only one event in each email notification (for simultaneous alerts ... another issue that exists) but will not fix the suppressed email notifications. Furthermore, removing the Throttle seems to just continuously alert on the same event. I want to keep the "Scheduled Alert" type (rather than "Realtime") due to the set-up that we have here, and I am also unable to experiment much with the configuration in test, as we only have email notifications in our live environment. The goal, in case it's not yet clear from the above, is to receive a single email notification for each event. Can you please advise / suggest the correct change that I should make to achieve this?
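In savedsearches.conf terms, per-result triggering with per-field-value suppression looks roughly like the sketch below, so that only repeats of the same event are throttled rather than the whole alert. The stanza name and the event_id field are assumptions; the field named in alert.suppress.fields must uniquely identify an event in the results.

    [My Alert]
    cron_schedule = * * * * *
    counttype = number of events
    relation = greater than
    quantity = 0
    # trigger once per result instead of once per search
    alert.digest_mode = 0
    # throttle repeats of the same event_id (an assumed field) for 24 hours
    alert.suppress = 1
    alert.suppress.period = 24h
    alert.suppress.fields = event_id
    action.email = 1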
I have loaded an SSL certificate on our development server (Splunk 8.1.4). I added the following to the server.conf file (based on the Splunk docs on what to add to the web.conf file):

    [sslConfig]
    enableSplunkdSSL = 1
    privKeyPath = $SPLUNK_HOME/etc/auth/mycerts/splunk.key
    serverCert = $SPLUNK_HOME/etc/auth/mycerts/splunk.pem

After restarting Splunk, I found a problem with the KV store; investigating further, I found that mongod did not restart (running ./splunk _internal call /services/server/info | grep -i kvstore returned <s:key name="kvStoreStatus">failed</s:key>).

Running this search in Splunk:

    index=_internal sourcetype=mongod

returns this error:

    [main] cannot read certificate file: /opt/splunk/etc/auth/mycerts/splunk.key error:0906D06C:PEM routines:PEM_read_bio:no start line

I cannot determine why this error is being generated.
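As a quick sanity check (a generic sketch, assuming openssl is installed on the host): "no start line" from PEM_read_bio usually means the file does not begin with a -----BEGIN ...----- header, for example because it is DER-encoded or has stray bytes before the header:

    head -1 /opt/splunk/etc/auth/mycerts/splunk.key
    openssl rsa -in /opt/splunk/etc/auth/mycerts/splunk.key -check -noout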
Hello, after upgrading our Splunk development instance to 8.2.0, the page https://<splunk-ip>/en-Gb/manager/<app_name>/data/macros can no longer be found in any app other than the basic Splunk Search app. It just shows "404 Not Found". Has anyone else experienced this issue? I am asking for a friend.
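One way to confirm the macros themselves still exist even when the management page 404s is the configuration REST endpoint behind it; a sketch run from the search bar, assuming permission to use the rest command:

    | rest /servicesNS/-/-/configs/conf-macros
    | table title eai:acl.app definition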
Hi All, I want to join two indexes and get a result.

Search Query 1:

    index=Microsoft
    | eval Event_Date=mvindex('eventDateTime',0)
    | eval UPN=mvindex('userStates{}.userPrincipalName',0)
    | eval Logon_Location=mvindex('userStates{}.logonLocation',0)
    | eval Event_Title=mvindex('title',0)
    | eval Event_Severity=mvindex('severity',0)
    | eval AAD_Acct=mvindex('userStates{}.aadUserId',0)
    | eval LogonIP=mvindex('userStates{}.logonIp',0)
    | eval Investigate=+RiskyUsersBlade/userId".AAD_Acct
    | stats count by LogonIP Event_Date Event_Title Event_Severity UPN Logon_Location Investigate

Search Query 2:

    index=o365 "Result of Query-1 LogonIP" earliest=-30d
    | stats dc(user) as "Distinct users"

If the Search Query 2 "Distinct users" result is greater than 20, I want to ignore the result.

@ITWhisperer @scelikok @soutamo @saravanan90 @thambisetty @bowesmana @to4kawa @woodcock @venkatasri
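A minimal sketch of one way to combine the two in a single search, under two assumptions: the o365 events expose the address in a field (src_ip here is a placeholder), and the goal is to keep only rows whose 30-day distinct-user count is 20 or fewer:

    index=Microsoft
    | eval LogonIP=mvindex('userStates{}.logonIp',0)
    | stats count by LogonIP
    | join type=left LogonIP
        [ search index=o365 earliest=-30d
          | stats dc(user) as distinct_users by src_ip
          | rename src_ip as LogonIP ]
    | where distinct_users<=20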
Hi, I have a problem with a few queries. I currently have something like this:

    index=nsw_prod_eximee ERROR
    | rex field=formInstanceNumber "(?<pref>\w{3})\d{9}"
    | rex field=applicationNumber "(?<pref>\w{3})\d{9}"
    | eval "Name" = case(pref=="USP", "mProtection", pref=="FGT", "mTravel", pref=="FGH", "HouseHold", pref=="FGS", "mMoto")
    | stats count as formInstanceNumber by "Name"
    | rename formInstanceNumber as "Errors"

This gives me a table with 4 values. My first problem is totalling the "Errors" column: I want to count all errors. The second problem is that I can't build a timechart from these values; when I try, no columns appear on the timechart. How should I write that query?

Thanks in advance.
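For the column total, one option is addcoltotals appended to the existing query; for the time-based view, replacing the stats with a timechart keeps _time in play. Both are sketches based on the query above:

    ... | stats count as Errors by Name
        | addcoltotals labelfield=Name label=Total

    index=nsw_prod_eximee ERROR
    | rex field=formInstanceNumber "(?<pref>\w{3})\d{9}"
    | rex field=applicationNumber "(?<pref>\w{3})\d{9}"
    | eval Name=case(pref=="USP", "mProtection", pref=="FGT", "mTravel", pref=="FGH", "HouseHold", pref=="FGS", "mMoto")
    | timechart count by Name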
Hi, I wanted to know if there is anything in particular to consider if one intends to connect an on-premise Splunk instance to an Oracle database in the cloud using DB Connect. The objective is to pull data from a table in the database. Thanks in advance.
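For reference, the JDBC URL shape for an Oracle service as used in a DB Connect connection looks like the sketch below; host, port, and service name are placeholders. A cloud-hosted Oracle database will usually also require network/firewall access from the Splunk host, and often TLS on the JDBC connection:

    jdbc:oracle:thin:@//db.example.com:1521/ORCLPDB1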
Hi all, I've been trying to ingest Cisco NetFlow logs into my Splunk environment, and finally got the logs in with Splunk Stream. However, there's a field "src_content" which Splunk seems unable to parse or read, and it appears as symbols. I suspect this is because Cisco NetFlow is sending them via High-Speed Logging. Is there a template for Splunk to decode these? It looks like this, for example:

    src_content: "��`��f^P,d�N�q������l��z�so#(���
Hi, currently I have a few network devices sending logs via syslog to Splunk with sourcetype Cisco:ios, and at present we are testing on only one device. Please guide me: what is the search string to alert on availability and interface utilisation (for one device, and for more devices)?
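As a starting point for the availability side, a common pattern is to alert when a device has sent nothing recently; this is a sketch with an assumed index name and an assumed 10-minute silence threshold:

    index=network sourcetype=cisco:ios earliest=-24h
    | stats latest(_time) as last_seen by host
    | eval minutes_silent=round((now()-last_seen)/60,0)
    | where minutes_silent>10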
Hi, in my sandpit I have a multi-cluster environment created for testing, and a Windows Universal Forwarder from which I need to send Sysmon logs to Splunk. Sysmon is successfully installed and logging is enabled. On the Windows UF I created inputs.conf and outputs.conf under Program Files\SplunkUniversalForwarder\etc\system\local.

inputs.conf:

    [monitor://%SystemRoot%\System32\Winevt\Logs\Microsoft-Windows-Sysmon%4Operational.evtx]
    index = main
    sourcetype = web

outputs.conf:

    [tcpout]
    defaultGroup=sysmon_server

    [tcpout:sysmon_server]
    server=FQDN:5986

I restarted Splunk and added port 5986 as a receiving port on the Heavy Forwarder of the clustered environment (ports 9997 and 9998 were not connecting, so I used 5986). The index and sourcetype mentioned above already exist on the HF. Telnet works and phonehome logs are there, but the Sysmon logs are not getting ingested into Splunk.
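For comparison, Windows event logs are normally collected with a WinEventLog input rather than a file monitor on the .evtx file, which is a binary format the monitor input cannot parse. A minimal sketch, keeping the index from the post as an assumption:

    [WinEventLog://Microsoft-Windows-Sysmon/Operational]
    index = main
    renderXml = true
    disabled = 0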
Need help with a Splunk query to display the % failures for each day in the selected time range.

% failures = A1/A2 * 100

A1 = total number of events returned by:

    index="abc" "searchTermForA1"

A2 = total number of events returned by:

    index="xyz" "searchTermForA2"

Expected output: one row per day, with columns Date | A1 | A2 | % failures. For a time range of 1-Jul to 7-Jul, there should be separate rows for 1-Jul, 2-Jul, 3-Jul, 4-Jul, 5-Jul, 6-Jul and 7-Jul.

Please help with the query. Thanks!
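A minimal sketch of one way to do this in a single search: run both searches together, tag each event by its index, and let timechart bucket by day (index and term names carried over from the post):

    (index="abc" "searchTermForA1") OR (index="xyz" "searchTermForA2")
    | eval type=if(index=="abc","A1","A2")
    | timechart span=1d count(eval(type=="A1")) as A1 count(eval(type=="A2")) as A2
    | eval pct_failures=round(A1/A2*100,2)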
The table headers' alignments seem completely random: some are aligned to the left and others to the right. Is there a way to make them all the same? I have already aligned the cells, but I cannot work out how to align the headers. Can anyone please help me with this?
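In a Simple XML dashboard, header alignment usually comes down to a CSS override; this is a sketch under the assumptions that the table panel has id "myTable" and that the dashboard loads a custom stylesheet (selectors can vary between Splunk versions):

    /* left-align every header cell in the table with id "myTable" */
    #myTable table th {
        text-align: left !important;
    }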
Dear Splunkers,

The result of my search looks like:

    TXID,STATUS_A,STATUS_B,STATUS_C
    A,OK,OK,OK
    B,OK,KO,INPROGRESS
    C,OK,OK,KO
    D,OK,KO,KO
    E,KO

Transaction E has no STATUS_B or STATUS_C fields. What I am trying to get is a count over all STATUS columns:

    STATUS_NAME,OK_COUNT,KO_COUNT,INPROGRESS_COUNT
    STATUS_A,4,1,0
    STATUS_B,2,2,0
    STATUS_C,1,2,1

Any hints are welcome. Thank you.
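A minimal sketch of one way to get that shape, using untable to pivot the STATUS columns into rows and chart to count each value (column renames left out for brevity):

    ... | untable TXID STATUS_NAME STATUS_VALUE
        | chart count over STATUS_NAME by STATUS_VALUE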
Dear Splunkers, can you please advise or direct me to the right place on the following question: we need to send a notification to collaborators when any changes are made to an investigation. Is there a way to create, e.g., an alert, or is there built-in functionality to notify users when someone updates an investigation with their findings? Thanks in advance!
Hello my friends, I have had a problem for 2 days: I am not allowed to search in Splunk. I need to reset the license key. Thankful
Hi. We have some IBM DB2 systems running primarily on AIX, and our Security team has now tasked us with collecting the audit log in Splunk. I tried just creating an input with a monitor stanza pointing at the right directory, but got nothing; I then changed it to look at subfolders, and I got some data. I have looked at the DB2 documentation, and there is a very cumbersome process described (https://www.ibm.com/docs/en/db2/11.1?topic=facility-storage-analysis-audit-logs). Does anybody have experience collecting DB2 audit logs, and how did you do it (file monitor or DB Connect)?

Kind regards
las
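If the db2audit route is used, the usual shape is to extract the audit records to delimited files on a schedule and monitor that directory; a hypothetical inputs.conf sketch (path, index, and sourcetype are all placeholders):

    [monitor:///home/db2inst1/audit/extracted]
    index = db2_audit
    sourcetype = db2:audit
    recursive = true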
Hello my friends, I have had a problem for 2 days: I am not allowed to search in Splunk. Thankful
Hi all, I have a row of a pivot table as shown in the picture. Is there a way to highlight the row if the values differ when comparing the last four columns? If possible, an even better solution would be to highlight the cell that is different AND has the lowest frequency. For instance, if in the last 4 columns of a row the values are '4', '4', '5', '4', only the cell with the 5 should be highlighted: it is different, and it also shows up the least number of times. Any help would be hugely appreciated. Please let me know if there are any issues with seeing the image.
Hi, I am using the Python SDK to search with this configuration:

    query_kwargs = {
        'earliest_time': earliest,
        'latest_time': latest,
        'results_preview': False,
        'search_mode': 'normal',
        'status_buckets': 2,
    }
    job = splunk_client.jobs.create(query, **query_kwargs)

As in the Splunk documentation (https://dev.splunk.com/enterprise/docs/devtools/python/sdk-python/howtousesplunkpython/howtorunsearchespython/), I then poll the job:

    while True:
        while not job.is_ready():
            pass
        stats = {
            'isDone': job['isDone'],
            'doneProgress': job['doneProgress'],
            'scanCount': job['scanCount'],
            'eventCount': job['eventCount'],
            'resultCount': job['resultCount'],
        }
        progress = float(stats['doneProgress']) * 100
        scanned = int(stats['scanCount'])
        matched = int(stats['eventCount'])
        result_count = int(stats['resultCount'])
        if verbose:
            status = ("\r%03.1f%% | %d scanned | %d matched | %d results"
                      % (progress, scanned, matched, result_count))
            sys.stdout.write(status)
            sys.stdout.flush()
        if job["isDone"] == "1":
            if verbose:
                sys.stdout.write("\n")
            break
        time.sleep(2)

Then, once the job is finished, I do this (number_of_results holds the resultCount from above):

    offset = 0
    max_event_count = 50000
    total_results = []
    first_50k_results = self.get_results(job, offset, max_event_count)
    total_results.extend(first_50k_results)
    while offset <= number_of_results:
        offset += max_event_count
        intermediate_result = self.get_results(job, offset, max_event_count)
        total_results.extend(intermediate_result)

    def get_results(self, job, offset, max_event_count):
        logger.info("collecting results, please wait . . ")
        results_list = []
        kwargs_paginate = {"count": max_event_count, "offset": offset}
        for result in results.ResultsReader(job.results(**kwargs_paginate)):
            results_list.append(result)
        return results_list

The issue is that the number of events returned by the Python search differs from the number of events returned by the same search in the Splunk console. Can you please advise what I am doing wrong? Please note that I am using an explicit index= in my search.
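A minimal sketch of how this pagination is often written, under two assumptions: the loop should be driven by the job's resultCount, and ResultsReader can yield diagnostic Message objects in addition to result dicts, which should not be counted as events:

    import splunklib.results as results

    def fetch_all(job, page_size=50000):
        """Page through all results of a finished search job."""
        total = int(job["resultCount"])
        rows = []
        offset = 0
        while offset < total:
            reader = results.ResultsReader(
                job.results(count=page_size, offset=offset))
            for item in reader:
                # ResultsReader yields dicts for results and
                # results.Message objects for server messages; keep dicts only.
                if isinstance(item, dict):
                    rows.append(item)
            offset += page_size
        return rows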