Hi folks, I have an issue where I can't get an event to break right. The event looks like this:

************************************
2024.09.03.141001
************************************
sqlplus -S -L swiftfilter/_REMOVED_@PPP @"long_lock_alert.sql"
TAG             COUNT(*)
--------------- ----------
PPP_locks_count          0
TAG             COUNT(*)
--------------- ----------
PPP_locks_count          0
SUCCESS
End Time: 2024.09.03.141006

Props looks like this:

[nk_pp_tasks]
SHOULD_LINEMERGE = false
LINE_BREAKER = End Time([^\*]+)
NO_BINARY_CHECK = true
TIME_FORMAT = %Y.%m.%d.%H%M%S
TIME_PREFIX = ^.+[\r\n]\s
BREAK_ONLY_BEFORE_DATE = false

Outcome is this: when the logfile is imported through 'Add Data', everything looks fine, yet the event ends up broken into three pieces. Any ideas on how to make Splunk not break up the event?
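For what it's worth: with SHOULD_LINEMERGE=false, Splunk treats the text matched by the first capture group in LINE_BREAKER as the delimiter between events and discards it, so End Time([^\*]+) both forces a break at every "End Time" and swallows the text after it. A minimal sketch of an alternative, assuming every record starts with an asterisk banner immediately followed by the YYYY.MM.DD.HHMMSS line (the regex widths are guesses from the sample above):

[nk_pp_tasks]
SHOULD_LINEMERGE = false
# break only where a banner line is immediately followed by a timestamp line;
# just the leading newlines (the capture group) are consumed
LINE_BREAKER = ([\r\n]+)\*+[\r\n]+\d{4}\.\d{2}\.\d{2}\.\d{6}
NO_BINARY_CHECK = true
TIME_FORMAT = %Y.%m.%d.%H%M%S
# the event starts with the banner, the timestamp sits on the next line
TIME_PREFIX = ^\*+[\r\n]+

The second banner inside each record (the one followed by the sqlplus line rather than a timestamp) no longer matches, so the whole block stays together as one event.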
Hi @MCW

1. How many events are returned by your <SPL search>?
2. Can you share the output of the <SPL search> that you used (e.g. as CSV)? I'd like to replicate your situation on my server.
3. Do you have access to the server where Splunk is running? If yes, can you provide the output of the following two commands?

./splunk show config mlspl | grep max_inputs
./splunk btool mlspl list --debug | grep max_inputs

Without knowing any more details, my guess is that your <SPL search> returned more events than your max_inputs setting allows (e.g. your search returns 200'000 events while max_inputs=100'000). Consequently, the events are downsampled by DSDL/MLTK. The resulting my_test_data.csv with 1153 lines that you see within the Jupyter notebook environment is exactly this sample. Regards, Gabriel
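If that is indeed the cause, the cap can be raised with a one-line override. A minimal sketch, assuming the stock MLTK app path and that the default stanza applies (the value is purely illustrative; keep the memory cost of larger samples in mind):

# $SPLUNK_HOME/etc/apps/Splunk_ML_Toolkit/local/mlspl.conf
[default]
max_inputs = 500000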
So everything is the same except that the metrics are different, the data is different, and generally we don't know what "doesn't work" or why, right? But seriously: the data is important here, as is what your transform looks like. Look at the Masa diagrams: https://community.splunk.com/t5/Getting-Data-In/Diagrams-of-how-indexing-works-in-the-Splunk-platform-the-Masa/m-p/590774 I haven't worked with metrics much, but I'd say the metric schema is invoked after transforms, so you need to filter your data by raw event contents.
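To illustrate that idea, a sketch that matches on _raw rather than on the extracted metric_name (the sourcetype and regex are made up):

# props.conf
[my:metrics:sourcetype]
TRANSFORMS-dropnoise = drop_unwanted_metrics

# transforms.conf
[drop_unwanted_metrics]
REGEX = pattern_present_in_the_raw_event
DEST_KEY = queue
FORMAT = nullQueue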
We're experiencing the same issues. We're running version 9.3.0 with a separate indexer, search head, and license server. All of our servers are affected by the memory leak. This started after we upgraded from version 9.1.3; we were hoping that subsequent updates would fix it. Is there any way we can assist in expediting your case?
1. Please don't post multiple threads about extracting fields from the same set of data.
2. Try to be more descriptive when naming the topic of a thread. "Regular expression" doesn't tell much about the thread contents.
Hi, I want to extract the purple part, but Severity can be Critical as well.

[Time:29-08@17:52:05.880] [60569130]
17:52:28.604 10.82.10.245 local0.notice [S=2952486] [BID=d57afa:30] RAISE-ALARM:acBoardEthernetLinkAlarm: [KOREASBC1] Ethernet link alarm. LAN port number 3 is down.; Severity:minor; Source:Board#1/EthernetLink#3; Unique ID:206; Additional Info1:GigabitEthernet 4/3; Additional Info2:SEL-SBC01; [Time:29-08@17:52:28.604] [60569131]
17:52:28.605 10.82.10.245 local0.warning [S=2952487] [BID=d57afa:30] RAISE-ALARM:acEthernetGroupAlarm: [KOREASBC1] Ethernet Group alarm. Ethernet Group 2 is Down.; Severity:major; Source:Board#1/EthernetGroup#2; Unique ID:207; Additional Info1:; [Time:29-08@17:52:28.605] [60569132]
17:52:28.721 10.82.10.245 local0.notice [S=2952488] [BID=d57afa:30] SYS_HA: Redundant unit physical network interface error fixed. [Code:0x46000] [Time:29-08@17:52:28.721] [60569133]
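A minimal SPL sketch, assuming the purple part is the severity value (the field name severity is my choice, not from the original post):

... | rex field=_raw "Severity:(?<severity>\w+)"

The \w+ captures minor, major and Critical alike.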
Hi, I want to extract the purple part.

[Time:29-08@17:53:05.654] [60569222]
17:53:05.654 10.82.10.245 local3.notice [S=2952578] [SID=d57afa:30:1773434] (N 71121559) AcSIPDialog(#28)::TransactionFail - ClientTransaction(#471) failed sending message with CSeq 1 OPTIONS CallID 20478380282982024175249@1.215.255.202, the cause is Transport Error [Time:29-08@17:53:05.654] [60569223]
17:53:05.655 10.82.10.245 local0.warning [S=2952579] [BID=d57afa:30] RAISE-ALARM:acProxyConnectionLost: [KOREASBC1] Proxy Set Alarm Proxy Set 1 (PS_ITSP): Proxy lost. looking for another proxy; Severity:major; Source:Board#1/ProxyConnection#1; Unique ID:208; Additional Info1:; [Time:29-08@17:53:05.655] [60569224]
17:53:05.656 10.82.10.245 local0.warning [S=2952580] [BID=d57afa:30] RAISE-ALARM:acIpGroupNoRouteAlarm: [KOREASBC1] IP Group is temporarily blocked. IP Group (IPG_ITSP) Blocked Reason: No Working Proxy; Severity:major; Source:Board#1/IPGroup#2; Unique ID:209; Additional Info1:; [Time:29-08@17:53:05.656] [60569225]
17:53:05.657 10.82.10.245 local3.notice [S=2952581] [SID=d57afa:30:1773434] (N 71121560) AcSIPDialog(#28): Handling DIALOG_DISCONNECT_REQ in state DialogInitiated
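Since the purple highlighting did not survive here, only a hedged sketch is possible; if the target is the alarm name after RAISE-ALARM:, something like this could be a starting point (the field name alarm is hypothetical):

... | rex field=_raw "RAISE-ALARM:(?<alarm>\w+):"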
Hi guys, I need to delete some records from the deployment server, but when I do it via Forwarder Management I get the "This functionality has been deprecated" alert. Is there any other way I can proceed?
I'm referring to the original post. @markhvesta said that his transforms are not working for metrics data. I have the same issue (the metric names are of course different), so the configuration is already here and I don't have to paste mine. The regex is working (tested on regex101). And the main question in this post is: "Any ideas if there is a special way to do this [for metrics data]?"
What do you mean by "configure rules for the BOTS dataset"? The BOTS dataset comes as pre-indexed buckets, which can cause issues. Being pre-indexed means it's already indexed "into the past": scheduled searches spawned by your correlation rules, which by default search through the last X minutes or hours worth of data, will not match anything, simply because the events are already in the past. That's one thing - you'd have to manually search through a time range in the past. Another potential thing (but I've never used the BOTS datasets, so I'm not sure what they look like inside; it's just speculation) could be if they were just raw indexed data without the accelerated datamodel summaries. That would make searches running from datamodels with summariesonly=t find no results. And as the events are indexed in the past, it would affect datamodel acceleration summary building and retention.
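For illustration, pointing a datamodel search at an explicit past range could look like this (the datamodel, field and dates are placeholders - aim them at wherever the BOTS events actually sit):

| tstats summariesonly=false count from datamodel=Web where earliest="01/01/2016:00:00:00" latest="01/01/2019:00:00:00" by Web.src

summariesonly=false makes it fall back to raw events when the acceleration summaries were never built.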
As far as I remember, HX can export events in multiple formats (bonus question - are you talking about operational logs or security events?), so you can also look at the HX side to check its configuration.
Hello,

import splunklib.results as results

# service is an authenticated splunklib.client.Service instance;
# JSONResultsReader assumes the job was run with output_mode='json' in kwargs_oneshot
oneshotsearch_results = service.jobs.oneshot(searchquery_oneshot, **kwargs_oneshot)
reader = results.JSONResultsReader(oneshotsearch_results)
for item in reader:
    if isinstance(item, dict):
        for key in item:
            if key == '<...>':
                A = str(item[key])
                print('A is :', A)

The above code was working till yesterday. Now it no longer enters the first for loop (i.e. for item in reader). I verified this by adding a print statement before the first if statement, and it is not printing.
1. The CM does not use master_uri (or manager_uri - master_uri is the deprecated name for the same setting) unless you're using a failover CM. You don't seem to be using one here, so you don't need this setting (nor the whole stanzas defining those managers).
2. I'm not sure whether you're listing settings from a deployer or a deployment server (if you have an SHC, you must use the deployer; a single SH can indeed be pushed to from a DS, but is that your situation?). In either case, apps destined for the SH(s) should be put into either $SPLUNK_HOME/etc/shcluster/apps (deployer) or $SPLUNK_HOME/etc/deployment-apps (deployment server).

The whole setup seems a bit strange.
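For reference, a minimal sketch of where manager_uri does belong - on the nodes that point at the CM, not on the CM itself (hostname, port and key are illustrative):

# server.conf on an indexer peer
[clustering]
mode = peer
manager_uri = https://cm.example.com:8089
pass4SymmKey = <the cluster key>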
That's not how it works. I don't know this particular solution, but I'm assuming it's trying to connect to some Dynatrace API endpoint to retrieve data. For this it has to verify that endpoint's certificate - not yours. You don't use your Splunk certificate here, nor do you use the certificate of the CA that issued your Splunk certificate. For the verification to work, you must trust the RootCA that sits at the root of the certification path of the server providing the Dynatrace API.
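A generic way to see which chain that endpoint actually presents (the hostname is a placeholder for your Dynatrace API host):

openssl s_client -connect your-dynatrace-host:443 -showcerts </dev/null

The RootCA at the top of that chain is the one that has to be in whatever trust store the integration uses.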