All Posts


I think I've seen this somewhere. For some reason map sometimes behaves differently when the search is specified in square brackets than when it's passed as a parameter to the search= option. Try the latter form (remembering about proper escaping):
| inputlookup mylookupup.csv
| fields Index, SearchString, Tdiv
| map search="search index=\"$Index$\" _raw=\"*$SearchString$*\" | bin span=\"$Tdiv$\" _time"
Hello, can the Splunk Python SDK be used along with a summary index? How? I want to schedule periodic querying and extraction of data from Splunk. I usually use the SDK like this, and it works for a one-time run since I removed my "collect index ..." part from the query:
service = client.connect(host=HOST, port=PORT, username=USERNAME, password=PASSWORD)
kwargs_oneshot = {"earliest_time": "-1h", "latest_time": "now", "output_mode": "json", "count": 100}
searchquery_oneshot = "search <query>"
# If I want the collected index results to be used below periodically, i.e. every 1 hour, what change do I make in my code?
oneshotsearch_results = service.jobs.oneshot(searchquery_oneshot, **kwargs_oneshot)
Thanks
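A rough sketch of the usual summary-index pattern, in case it helps frame the question (the index name hourly_summary and the source value are assumptions, not anything from the original setup): a scheduled search on the Splunk side fills the summary index every hour, for example
search <query> earliest=-1h@h latest=@h | collect index=hourly_summary source=my_hourly_rollup
and the SDK part then only reads from it, e.g. searchquery_oneshot = "search index=hourly_summary source=my_hourly_rollup". The hourly scheduling is handled by the saved/scheduled search (or an external cron job calling the script), not by anything inside the oneshot call itself.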
I am unable to see any logs in Splunk from my Spring Boot application. I am adding my XML property file, controller file, dependency file, and Splunk data input screenshots to help resolve the issue. I have been breaking my head over this for the past couple of days and cannot find what I am missing. (Screenshots: HEC data input config UI, HEC data input edit UI, Global Settings.)
The following API call does log events into my index:
curl -k http://localhost:8088/services/collector/event -H "Authorization: Splunk ***" -d '{"event": "Hello, Splunk!"}'
This is my log4j2-spring.xml:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
  <Appenders>
    <Console name="console" target="SYSTEM_OUT">
      <PatternLayout pattern="%style{%d{ISO8601}} %highlight{%-5level }[%style{%t}{bright,blue}] %style{%C{10}}{bright,yellow}: %msg%n%throwable" />
    </Console>
    <SplunkHttp name="splunkhttp" url="http://localhost:8088" token="***" host="localhost"
                index="customer_api_dev" type="raw" source="http-event-logs" sourcetype="log4j"
                messageFormat="text" disableCertificateValidation="true">
      <PatternLayout pattern="%m" />
    </SplunkHttp>
  </Appenders>
  <Loggers>
    <!-- LOG everything at DEBUG level -->
    <Root level="debug">
      <AppenderRef ref="console" />
      <AppenderRef ref="splunkhttp" />
    </Root>
  </Loggers>
</Configuration>
This is my controller:
package com.example.advanceddbconcepts.controller;

import com.example.advanceddbconcepts.entity.Customer;
import com.example.advanceddbconcepts.entity.Order;
import com.example.advanceddbconcepts.service.CustomerService;
import lombok.Getter;
import lombok.Setter;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;
import java.util.List;

@RestController
@RequestMapping("/api/customers")
public class CustomerController {

    Logger logger = LogManager.getLogger(CustomerController.class);

    private final CustomerService customerService;

    public CustomerController(CustomerService customerService) {
        this.customerService = customerService;
    }

    @PostMapping
    public ResponseEntity<Customer> createCustomerWithOrder(@RequestBody CustomerRequestOrder request) {
        Customer customer = new Customer(request.getCustomerName());
        logger.info("Created a customer with name {}", request.getCustomerName());
        List<Order> orders = request.getProductName().stream().map(Order::new).toList();
        Customer savedCustomer = customerService.createCustomerWithOrder(customer, orders);
        logger.info("API is successful");
        return ResponseEntity.ok().body(savedCustomer);
    }

    @Getter
    @Setter
    public static class CustomerRequestOrder {
        private String customerName;
        private List<String> productName;
    }
}
I have added the dependencies below to my pom.xml:
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-log4j2</artifactId>
  <version>3.3.3</version>
</dependency>
<dependency>
  <groupId>com.splunk.logging</groupId>
  <artifactId>splunk-library-javalogging</artifactId>
  <version>1.11.8</version>
</dependency>
I am unable to see any logs in Splunk after I hit the API, even though I can see them locally:
2024-09-02T19:37:00.629+05:30 INFO 24912 --- [nio-8080-exec-4] c.e.a.controller.CustomerController : Created a customer with name John Doe
2024-09-02T19:37:00.667+05:30 INFO 24912 --- [nio-8080-exec-4] c.e.a.controller.CustomerController : API is successful
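Two searches that often help when HEC events from an appender never show up, offered as a sketch (the index name is taken from the appender config above, and the splunkd component name used for HEC errors is from memory - adjust as needed):
index=customer_api_dev earliest=-24h | stats count by source, sourcetype
index=_internal sourcetype=splunkd component=HttpInputDataHandler earliest=-24h
If the second search shows token or index errors, the HEC token's allowed/default indexes may not include customer_api_dev. It is also worth double-checking what type="raw" does in your appender version, since the curl test that works posts to the /services/collector/event endpoint rather than the raw one.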
I'm trying to use a lookup to specify the span option value in the bin command with map:
| inputlookup mylookupup.csv
| fields Index, SearchString, Tdiv
| map [ search index="$Index$" _raw="*$SearchString$*" | bin span="$Tdiv$" _time]
The previous search fails with:
Error in 'bin' command: The value for option span (Tdiv) is invalid. When span is expressed using a sub-second unit (ds, cs, ms, us), the span value needs to be < 1 second, and 1 second must be evenly divisible by the span value.
Example of values in the Tdiv field: 15m, 1h
Could you help me with this problem?
I suppose you had to authorize that website in dashboard conf files, am I right?
Hi folks.. I have an issue where I can't get an event to break right. The event looks like this:
************************************
2024.09.03.141001
************************************
sqlplus -S -L swiftfilter/_REMOVED_@PPP @"long_lock_alert.sql"
TAG COUNT(*)
--------------- ----------
PPP_locks_count 0
TAG COUNT(*)
--------------- ----------
PPP_locks_count 0
SUCCESS
End Time: 2024.09.03.141006
Props looks like this:
[nk_pp_tasks]
SHOULD_LINEMERGE=false
LINE_BREAKER=End Time([^\*]+)
NO_BINARY_CHECK=true
TIME_FORMAT=%Y.%m.%d.%H%M%S
TIME_PREFIX=^.+[\r\n]\s
BREAK_ONLY_BEFORE_DATE = false
The outcome is this: when the logfile is imported through 'Add Data', everything looks fine and the event has not been broken up into 3. Any ideas on how to make Splunk not break up the event?
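Two things worth checking, offered as an untested sketch rather than a definitive fix. First, these props must be deployed on the instance that actually parses the data (indexer or heavy forwarder) under this exact sourcetype - the 'Add Data' preview applies them on the box you upload to, which is why it can look fine there and still break differently at index time. Second, a LINE_BREAKER that anchors on the End Time line and only consumes the newlines is a bit more predictable than one with an open-ended [^\*]+ capture group:
[nk_pp_tasks]
SHOULD_LINEMERGE = false
# break after the "End Time: ..." line; only the captured newlines are discarded
LINE_BREAKER = End Time:[^\r\n]+([\r\n]+)
NO_BINARY_CHECK = true
TIME_PREFIX = ^.+[\r\n]\s
TIME_FORMAT = %Y.%m.%d.%H%M%S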
Hi @MCW
1. How many events are returned by your <SPL search>?
2. Can you share the output of your <SPL search> that you used (e.g. as CSV)? I'd like to replicate your situation on my server.
3. Do you have access to the server where Splunk is running? If yes, can you provide the output of the following two commands?
./splunk show config mlspl | grep max_inputs
./splunk btool mlspl list --debug | grep max_inputs
Without knowing any more details, my guess is that your <SPL search> returned more events than your max_inputs setting allows (e.g. your search returns 200'000 events but max_inputs=100'000). Consequently, the events are downsampled by DSDL/MLTK. The resulting my_test_data.csv with 1153 lines that you see within the Jupyter notebook environment is exactly this sample.
Regards, Gabriel
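If downsampling does turn out to be the cause and your search head can handle larger inputs, the limit can be raised in a local mlspl.conf - roughly like this, where 500000 is only an example value, not a recommendation:
[default]
max_inputs = 500000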
So everything is the same, except the metrics are different, the data is different, and generally we don't know what "doesn't work" or why, right? But seriously, the data is important here, as well as what your transform looks like. Look at the Masa diagrams: https://community.splunk.com/t5/Getting-Data-In/Diagrams-of-how-indexing-works-in-the-Splunk-platform-the-Masa/m-p/590774 I haven't worked with metrics much, but I'd say the metric schema is invoked after transforms, so you need to filter your data by raw event contents.
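For what it's worth, the classic event-style filtering pattern looks like the sketch below; whether it actually takes effect before the metric schema is applied for your particular input is exactly the open question in this thread, so treat the stanza names and the regex as placeholders:
props.conf
[my_metrics_sourcetype]
TRANSFORMS-dropmetrics = drop_unwanted_metrics
transforms.conf
[drop_unwanted_metrics]
REGEX = <pattern matching the raw payload of the metrics to drop>
DEST_KEY = queue
FORMAT = nullQueue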
https://regex101.com/r/Op8H3R/1
| rex max_match=0 "(?<alarm>RAISE-ALARM[^;]+;)"
Regex101.com is a good place to try and learn regular expressions: https://regex101.com/r/F3vySr/1
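If it helps to see it end to end, here is a quick way to test the extraction against one of the sample lines from the question (the sample text is pasted in, trimmed):
| makeresults
| eval _raw="17:52:28.604 10.82.10.245 local0.notice [S=2952486] [BID=d57afa:30] RAISE-ALARM:acBoardEthernetLinkAlarm: [KOREASBC1] Ethernet link alarm. LAN port number 3 is down.; Severity:minor; Source:Board#1/EthernetLink#3;"
| rex max_match=0 "(?<alarm>RAISE-ALARM[^;]+;)"
| table alarm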
We're experiencing the same issues. We're running version 9.3.0 with a separate indexer, search head, and license server. All of our servers are affected by the memory leak. This started after we upgraded from version 9.1.3. We were hoping that subsequent updates would fix it. Is there any way we can assist in expediting your case?
1. Please don't post multiple threads about extracting fields from the same set of data. 2. Try to be more descriptive in naming the topic of the thread. "Regular expression" doesn't tell much about the thread contents.
Hi, I want to extract the purple part, but Severity can be Critical as well.
[Time:29-08@17:52:05.880] [60569130]
17:52:28.604 10.82.10.245 local0.notice [S=2952486] [BID=d57afa:30] RAISE-ALARM:acBoardEthernetLinkAlarm: [KOREASBC1] Ethernet link alarm. LAN port number 3 is down.; Severity:minor; Source:Board#1/EthernetLink#3; Unique ID:206; Additional Info1:GigabitEthernet 4/3; Additional Info2:SEL-SBC01; [Time:29-08@17:52:28.604] [60569131]
17:52:28.605 10.82.10.245 local0.warning [S=2952487] [BID=d57afa:30] RAISE-ALARM:acEthernetGroupAlarm: [KOREASBC1] Ethernet Group alarm. Ethernet Group 2 is Down.; Severity:major; Source:Board#1/EthernetGroup#2; Unique ID:207; Additional Info1:; [Time:29-08@17:52:28.605] [60569132]
17:52:28.721 10.82.10.245 local0.notice [S=2952488] [BID=d57afa:30] SYS_HA: Redundant unit physical network interface error fixed. [Code:0x46000] [Time:29-08@17:52:28.721] [60569133]
Hi, I want to extract the purple part.
[Time:29-08@17:53:05.654] [60569222]
17:53:05.654 10.82.10.245 local3.notice [S=2952578] [SID=d57afa:30:1773434] (N 71121559) AcSIPDialog(#28)::TransactionFail - ClientTransaction(#471) failed sending message with CSeq 1 OPTIONS CallID 20478380282982024175249@1.215.255.202, the cause is Transport Error [Time:29-08@17:53:05.654] [60569223]
17:53:05.655 10.82.10.245 local0.warning [S=2952579] [BID=d57afa:30] RAISE-ALARM:acProxyConnectionLost: [KOREASBC1] Proxy Set Alarm Proxy Set 1 (PS_ITSP): Proxy lost. looking for another proxy; Severity:major; Source:Board#1/ProxyConnection#1; Unique ID:208; Additional Info1:; [Time:29-08@17:53:05.655] [60569224]
17:53:05.656 10.82.10.245 local0.warning [S=2952580] [BID=d57afa:30] RAISE-ALARM:acIpGroupNoRouteAlarm: [KOREASBC1] IP Group is temporarily blocked. IP Group (IPG_ITSP) Blocked Reason: No Working Proxy; Severity:major; Source:Board#1/IPGroup#2; Unique ID:209; Additional Info1:; [Time:29-08@17:53:05.656] [60569225]
17:53:05.657 10.82.10.245 local3.notice [S=2952581] [SID=d57afa:30:1773434] (N 71121560) AcSIPDialog(#28): Handling DIALOG_DISCONNECT_REQ in state DialogInitiated
Hi guys, I need to delete some records from the deployment server, but when I do it via forwarder management I get the "This functionality has been deprecated" alert. Is there any other way I can proceed?
I'm referring to the original post. @markhvesta said that his transforms are not working for metrics data. I have the same issue (the metric names are of course different). So the configuration is already here; I don't have to paste mine. The regex is working (tested on regex101). And the main question in this post is: "Any ideas if there is a special way to do this [for metrics data]?"
What do you mean by "configure rules for BOTS dataset"? The BOTS dataset comes as pre-indexed buckets, which can cause issues. Pre-indexed means it's already indexed "into the past". This means that the scheduled searches spawned by your correlation rules, which by default search through the last X minutes' or hours' worth of data, will not match anything, simply because the events are already in the past. That's one thing - you'd have to manually search through a time range in the past. Another potential thing (but I've never used the BOTS datasets, so I'm not sure what they look like inside; I'm just speculating) could be if they were just raw indexed data without the accelerated datamodel summaries. That would make searches running from datamodels with summariesonly=t find no results. And as the events are indexed in the past, it would affect DAS building and retention.
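As a concrete illustration of the "search in the past" point (the index name botsv1 and the datamodel are assumptions - adjust them to however the dataset was actually loaded): either remove the lower time bound in a plain search, or drop the summaries-only requirement when going through a datamodel, e.g.
index=botsv1 earliest=0 latest=now <your search terms>
| tstats summariesonly=false count from datamodel=Authentication where index=botsv1 by Authentication.src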
Maybe so that you can show _your_ config, _your_ data and say what exactly does or doesn't work in your case.
Thank you @PickleRick. Changing "master" to "manager" in the cluster manager URI setting worked.
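For anyone finding this thread later: if this refers to the master_uri to manager_uri rename, the relevant server.conf setting looks roughly like this (the mode and the address depend on the instance's role - this is only a sketch):
[clustering]
mode = peer
manager_uri = https://cluster-manager.example.com:8089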
HX can export events in multiple formats as far as I remember (bonus question - are you talking about operational logs or security events?) so you can also look on the HX's side to check its configuration.