All Posts

We're experiencing the same issues. We're running version 9.3.0 with a separate indexer, search head, and license server. All of our servers are affected by the memory leak. This started after we upgraded from version 9.1.3. We were hoping that subsequent updates would fix it. Is there any way we can assist in expediting your case?
1. Please don't post multiple threads about extracting fields from the same set of data.
2. Try to be more descriptive when naming the topic of a thread. "Regular expression" doesn't say much about the thread's contents.
Hi, I want to extract the purple part, but Severity can be Critical as well.
[Time:29-08@17:52:05.880] [60569130]
17:52:28.604 10.82.10.245 local0.notice [S=2952486] [BID=d57afa:30] RAISE-ALARM:acBoardEthernetLinkAlarm: [KOREASBC1] Ethernet link alarm. LAN port number 3 is down.; Severity:minor; Source:Board#1/EthernetLink#3; Unique ID:206; Additional Info1:GigabitEthernet 4/3; Additional Info2:SEL-SBC01; [Time:29-08@17:52:28.604] [60569131]
17:52:28.605 10.82.10.245 local0.warning [S=2952487] [BID=d57afa:30] RAISE-ALARM:acEthernetGroupAlarm: [KOREASBC1] Ethernet Group alarm. Ethernet Group 2 is Down.; Severity:major; Source:Board#1/EthernetGroup#2; Unique ID:207; Additional Info1:; [Time:29-08@17:52:28.605] [60569132]
17:52:28.721 10.82.10.245 local0.notice [S=2952488] [BID=d57afa:30] SYS_HA: Redundant unit physical network interface error fixed. [Code:0x46000] [Time:29-08@17:52:28.721] [60569133]
Hi, I want to extract the purple part.
[Time:29-08@17:53:05.654] [60569222]
17:53:05.654 10.82.10.245 local3.notice [S=2952578] [SID=d57afa:30:1773434] (N 71121559) AcSIPDialog(#28)::TransactionFail - ClientTransaction(#471) failed sending message with CSeq 1 OPTIONS CallID 20478380282982024175249@1.215.255.202, the cause is Transport Error [Time:29-08@17:53:05.654] [60569223]
17:53:05.655 10.82.10.245 local0.warning [S=2952579] [BID=d57afa:30] RAISE-ALARM:acProxyConnectionLost: [KOREASBC1] Proxy Set Alarm Proxy Set 1 (PS_ITSP): Proxy lost. looking for another proxy; Severity:major; Source:Board#1/ProxyConnection#1; Unique ID:208; Additional Info1:; [Time:29-08@17:53:05.655] [60569224]
17:53:05.656 10.82.10.245 local0.warning [S=2952580] [BID=d57afa:30] RAISE-ALARM:acIpGroupNoRouteAlarm: [KOREASBC1] IP Group is temporarily blocked. IP Group (IPG_ITSP) Blocked Reason: No Working Proxy; Severity:major; Source:Board#1/IPGroup#2; Unique ID:209; Additional Info1:; [Time:29-08@17:53:05.656] [60569225]
17:53:05.657 10.82.10.245 local3.notice [S=2952581] [SID=d57afa:30:1773434] (N 71121560) AcSIPDialog(#28): Handling DIALOG_DISCONNECT_REQ in state DialogInitiated
Hi guys, I need to delete some records from the deployment server; however, when I do it via Forwarder Management I get the "This functionality has been deprecated" alert. Is there any other way I can proceed?
I'm referring to the original post. @markhvesta said that his transforms are not working for metrics data. I have the same issue (the metric names are of course different). So the configuration is already here; I don't have to paste my configuration. The regex is working (tested on regex101). And the main question in this post is "Any ideas if there is a special way to do this [for metrics data]?"
What do you mean by "configure rules for the BOTS dataset"? The BOTS dataset comes as pre-indexed buckets, which can cause issues. It's pre-indexed, which means it's already indexed "into the past". This means that the scheduled searches spawned by your correlation rules, which by default search through the last X minutes' or hours' worth of data, will not match anything, simply because the events are already in the past. That's one thing - you'd have to manually search through a time range in the past. Another potential issue (but I've never used the BOTS datasets, so I'm not sure what they look like inside; I'm just speculating) could be if they were just raw indexed data without the accelerated datamodel summaries. That would make searches running from datamodels with summariesonly=t find no results. And as the events are indexed in the past, it would affect DAS building and retention.
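For example, to search such a pre-indexed range explicitly, here is a minimal sketch assuming the splunk-sdk-python client (host, credentials, index name, and time bounds are all placeholders):

import splunklib.client as client
import splunklib.results as results

# Connect to Splunk (placeholder host/credentials).
service = client.connect(host="localhost", port=8089,
                         username="admin", password="changeme")

# BOTS events live "in the past", so pin the search to an explicit
# historical window instead of a relative "last X minutes".
kwargs = {
    "earliest_time": "2017-08-01T00:00:00.000+00:00",
    "latest_time": "2017-09-01T00:00:00.000+00:00",
    "output_mode": "json",
}
reader = results.JSONResultsReader(
    service.jobs.oneshot("search index=botsv2 | head 10", **kwargs))
for result in reader:
    if isinstance(result, dict):
        print(result)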
Maybe so that you can show _your_ config, _your_ data, and say exactly what does or doesn't work in your case.
Thank you @PickleRick - changing master to manager in the cluster URI setting worked.
HX can export events in multiple formats as far as I remember (bonus question - are you talking about operational logs or security events?), so you can also look at the HX side to check its configuration.
Hello,

oneshotsearch_results = service.jobs.oneshot(searchquery_oneshot, **kwargs_oneshot)
reader = results.JSONResultsReader(oneshotsearch_results)
for item in reader:
    if isinstance(item, dict):
        for key in item:
            if key == '<...>':
                A = str(item[key])
                print('A is :', A)

The above code was working until yesterday. Now it no longer enters the first for loop (i.e. for item in reader). I verified this by adding a print statement before the first if statement, and it is not printing.
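One way to see why the reader yields nothing - a sketch reusing the poster's service and searchquery_oneshot, assuming the splunk-sdk-python results module: JSONResultsReader only parses results requested with output_mode="json", and it also yields results.Message items whose text often explains an empty result set.

import splunklib.results as results

# output_mode must be "json" for JSONResultsReader (placeholder time range):
kwargs_oneshot = {"earliest_time": "-24h", "latest_time": "now",
                  "output_mode": "json"}

oneshotsearch_results = service.jobs.oneshot(searchquery_oneshot, **kwargs_oneshot)
reader = results.JSONResultsReader(oneshotsearch_results)
for item in reader:
    if isinstance(item, results.Message):
        # Splunk's diagnostic messages often explain why no rows came back.
        print(item.type, item.message)
    elif isinstance(item, dict):
        print(item)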
1. The CM does not use master_uri (or manager_uri - master_uri is the deprecated name of the setting) unless you're using a failover CM. You don't seem to use one here, so you don't need this setting (nor the whole stanzas defining those managers).
2. I'm not sure if you're listing settings from the deployer or the deployment server (if you have a SHC, you must use the deployer; single SHs can indeed be pushed from a DS, but is that your situation?). In either case, apps destined for the SH(s) should be put into either $SPLUNK_HOME/etc/shcluster/apps or $SPLUNK_HOME/etc/deployment-apps.
The whole setup seems a bit strange.
| rex "(?<time>\[Time:[^\]]+\])"
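If you want to sanity-check that pattern outside Splunk first, here is a quick sketch in Python (the sample string is shortened from the events above):

import re

sample = "17:52:28.604 10.82.10.245 local0.notice [S=2952486] [Time:29-08@17:52:28.604]"
# Same pattern as the rex above: capture "[Time:...]" into a "time" group.
m = re.search(r"(?P<time>\[Time:[^\]]+\])", sample)
if m:
    print(m.group("time"))  # prints: [Time:29-08@17:52:28.604]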
That's not how it works. I don't know this particular solution, but I'm assuming it's trying to connect to some Dynatrace API endpoint to retrieve data. For this it has to verify that server's certificate. Not yours. You don't use your Splunk certificate here, nor do you use the certificate of the CA that issued your Splunk certificate. For the verification to work, you must trust the root CA at the root of the certification path of the server providing the Dynatrace API.
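As an illustration - a minimal sketch using a plain Python requests call (the endpoint URL and bundle path are placeholders): the trust anchor you hand the client must cover the Dynatrace server's chain, not your own certificate.

import requests

# The CA bundle must contain the root CA anchoring the Dynatrace API
# server's certificate chain - not your Splunk server certificate.
resp = requests.get(
    "https://your-dynatrace-host/api/v2/metrics",  # placeholder endpoint
    verify="/path/to/dynatrace_root_ca.pem",       # placeholder bundle path
)
print(resp.status_code)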
OK. Mind you that those are not directly Splunk-related things; it's more like my personal outlook based on 25 years of admin experience.
1. For me, you're doing too many things. I understand your approach, but I prefer the KISS approach when doing shell scripts. For more complex things I'd go Python. But that's my personal taste. I don't like overly complicated bash scripts because they tend to get messy quickly. To be honest, I would do it the other way around - write a single simple script to manage a single frozen index storage with given parameters (size/time), and possibly add another "managing" script spawning that script for each index independently.
2. I don't see the point in generating a random PROCESS_ID. And even less so - using an external dependency on openssl to generate the value of this variable.
3. You are hardcoding many paths - LOG_FILE, CONFIG_FILE, FROZEN_PATH... It might be OK if you're only writing a one-off script for internal use. When doing a portable solution it's much more user-friendly to make it configurable. The easiest way would be to externalize those definitions to another file and include them using the dot (source) command. Bonus - you can use the same config file in both scripts. Presently you have to configure both scripts separately.
4. Chmod-ing another script so you can run it... that's not nice. It should be in the installation instructions.
5. I don't like the idea of a script creating the service file. Just provide a service file template with instructions to customize it if needed. (I would probably do it with cron instead of a service, but that's me - I'm old.)
6. IMHO such a script manipulating relatively sensitive data should use a lock file to prevent it from being run multiple times in parallel.
7. The mechanics of deleting frozen buckets are highly suboptimal. You're spawning several finds and a du after removing each file. That's a lot of unnecessary disk scanning. Also - why remove files from the bucket directory and only afterwards remove the empty directory?
8. To make the script consistent with how Splunk handles buckets, you should not use ctime or mtime but rather take the timestamps from the bucket boundaries. (They might result in the same order, since buckets will probably be frozen in the same order they should roll out from frozen, but - especially if you're using shared storage for frozen across multiple cluster nodes and doing deduplication - it's not guaranteed.)
9. Sorry to say it, but it shows that it was written with ChatGPT - there are some design choices which are inconsistent (like the timestamp manipulation, and sometimes doing arithmetic using built-in bash functionality whereas other times spawning bc).
So again - I do appreciate the effort. It's just that I would either do it completely differently (which might simply be my personal taste) or - if it was to be a quick and dirty hack - I would simply use tmpreaper (if your distro provides it) or do find /frozen_path -ctime +Xd -delete (yes, it doesn't account for size limits, but it's quick and reliable). If you want to use size limits, just list directory sizes, sort by date, sum them up until you hit the limit, and delete the rest; see the sketch below. Et voila. Honestly, don't overthink it.
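If you wanted that size-limit logic spelled out, here is a minimal sketch in Python (FROZEN_PATH and MAX_BYTES are placeholder values; it orders by mtime for brevity, though as noted in point 8 the bucket boundary timestamps are the more faithful key):

import shutil
from pathlib import Path

FROZEN_PATH = Path("/frozen_path")  # placeholder: frozen storage root
MAX_BYTES = 500 * 1024**3           # placeholder: 500 GiB budget

def dir_size(path):
    # Total size of all files under a bucket directory.
    return sum(f.stat().st_size for f in path.rglob("*") if f.is_file())

# Newest buckets first, so the oldest ones are the first to exceed the budget.
buckets = sorted((d for d in FROZEN_PATH.iterdir() if d.is_dir()),
                 key=lambda d: d.stat().st_mtime, reverse=True)

used = 0
for bucket in buckets:
    used += dir_size(bucket)
    if used > MAX_BYTES:
        # Over budget from here on - delete the whole bucket directory.
        shutil.rmtree(bucket)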
Hello, I want to extract the purple highlighted part.
[Time:29-08@17:53:03.562] [60569219]
17:53:03.562 10.82.10.245 local3.notice [S=2952575] [SID=d57afa:30:1773441] (N 71121555) #98)gwSession[Deallocated] [Time:29-08@17:53:03.562] [60569220]
17:53:05.158 10.82.10.245 local3.notice [S=2952576] [SID=d57afa:30:1773434] (N 71121556) RtxMngr::Transmit 1 OPTIONS Rtx Left: 0 Dest: 211.237.70.18:5060, TU: AcSIPDialog(#28) (N 71121557) SIPTransaction(#471)::SendMsgBuffer - Resending last message [Time:29-08@17:53:05.158] [60569221]
17:53:05.654 10.82.10.245 local3.notice [S=2952577] [SID=d57afa:30:1773434] (N 71121558) RtxMngr::Dispatch - Retransmission of message 1 OPTIONS was ended. Terminating transaction... [Time:29-08@17:53:05.654] [60569222]
17:53:05.654 10.82.10.245 local3.notice [S=2952578] [SID=d57afa:30:1773434] (N 71121559) AcSIPDialog(#28)::TransactionFail - ClientTransaction(#471) failed sending message with CSeq 1 OPTIONS CallID 20478380282982024175249@1.215.255.202, the cause is Transport Error [Time:29-08@17:53:05.654] [60569223]
17:53:05.655 10.82.10.245 local0.warning [S=2952579] [BID=d57afa:30] RAISE-ALARM:acProxyConnectionLost: [KOREASBC1] Proxy Set Alarm Proxy Set 1 (PS_ITSP): Proxy lost. looking for another proxy; Severity:major; Source:Board#1/ProxyConnection#1; Unique ID:208; Additional Info1:; [Time:29-08@17:53:05.655] [60569224]
17:53:05.656 10.82.10.245 local0.warning [S=2952580] [BID=d57afa:30] RAISE-ALARM:acIpGroupNoRouteAlarm: [KOREASBC1] IP Group is temporarily blocked. IP Group (IPG_ITSP) Blocked Reason: No Working Proxy; Severity:major; Source:Board#1/IPGroup#2; Unique ID:209; Additional Info1:; [Time:29-08@17:53:05.656] [60569225]
17:53:05.657 10.82.10.245
I also have a lookup which is being updated, but the user is n/a. It's a CSV lookup. I cannot find any relevant occurrences of outputlookup before the update event. What ways other than using outputlookup could there be that resulted in the lookup being updated?
I also encountered exactly the same problem on my search head cluster. Now I'm on version 9.1.5 and still having this issue.
Hi @arielpconsolaci, I think you just need to replace ['vizapi/SplunkVisualizationBase', 'vizapi/SplunkVisualizationUtils'] with ['api/SplunkVisualizationBase', 'api/SplunkVisualizationUtils'] in both visualization_src.js and webpack.config.js.
Please share your configuration, as there is probably something amiss there. Also, please share your raw event (anonymised, of course), preferably in a code block so we can see all the spacing; that way we can figure out what needs changing in the configuration.