All Topics

Hi guys, has anyone written a search to monitor CPU on the Fortinet firewalls? It's in the app but doesn't seem to work. Cheers, Ahmed
Hello, can the Splunk Python SDK be used along with a summary index? How? I wish to schedule periodic querying and extraction of data from Splunk, for which I usually use the SDK like this; it works for a one-time run once I removed my "collect index ..." code from the query:

service = client.connect(host=HOST, port=PORT, username=USERNAME, password=PASSWORD)
kwargs_oneshot = {"earliest_time": "-1h", "latest_time": "now", "output_mode": "json", "count": 100}
searchquery_oneshot = "search <query>"
# If I want the collected index results to be used periodically (i.e. every 1 hour), what change do I make in my code?
oneshotsearch_results = service.jobs.oneshot(searchquery_oneshot, **kwargs_oneshot)

Thanks
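A minimal sketch of one way to approach this, assuming the `collect` clause simply needs to be re-appended to the query before each scheduled run. The `my_summary` index name and the `build_summary_search` helper are hypothetical, not from the post:

```python
def build_summary_search(base_query, summary_index="my_summary"):
    """Append a `collect` clause so each run's results land in a summary index.

    `summary_index` is a placeholder; the scheduling itself could be handled
    by cron or a simple timer loop around service.jobs.oneshot(...).
    """
    return f"{base_query} | collect index={summary_index}"

# Example: the hourly oneshot search from the post, now writing to the summary index
searchquery_oneshot = build_summary_search("search <query>")
```

With `earliest_time: -1h` and the job scheduled hourly, each run would collect one hour's worth of results into the summary index, which later searches can read with `index=my_summary`.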
I am unable to see any logs in Splunk from my Spring Boot application. I am adding my XML property file, controller file, dependency file, and Splunk data input screenshots to help resolve the issue. I have been banging my head against this for the past couple of days and cannot find what I am missing.

HEC data input config UI
HEC data input edit UI
Global Settings

The following API call logs events into my index:

curl -k http://localhost:8088/services/collector/event -H "Authorization: Splunk ***" -d '{"event": "Hello, Splunk!"}'

This is my log4j2-spring.xml:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
  <Appenders>
    <Console name="console" target="SYSTEM_OUT">
      <PatternLayout pattern="%style{%d{ISO8601}} %highlight{%-5level }[%style{%t}{bright,blue}] %style{%C{10}}{bright,yellow}: %msg%n%throwable" />
    </Console>
    <SplunkHttp name="splunkhttp" url="http://localhost:8088" token="***" host="localhost" index="customer_api_dev" type="raw" source="http-event-logs" sourcetype="log4j" messageFormat="text" disableCertificateValidation="true">
      <PatternLayout pattern="%m" />
    </SplunkHttp>
  </Appenders>
  <Loggers>
    <!-- LOG everything at DEBUG level -->
    <Root level="debug">
      <AppenderRef ref="console" />
      <AppenderRef ref="splunkhttp" />
    </Root>
  </Loggers>
</Configuration>

This is my controller:

package com.example.advanceddbconcepts.controller;

import com.example.advanceddbconcepts.entity.Customer;
import com.example.advanceddbconcepts.entity.Order;
import com.example.advanceddbconcepts.service.CustomerService;
import lombok.Getter;
import lombok.Setter;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

import java.util.List;

@RestController
@RequestMapping("/api/customers")
public class CustomerController {

    Logger logger = LogManager.getLogger(CustomerController.class);

    private final CustomerService customerService;

    public CustomerController(CustomerService customerService) {
        this.customerService = customerService;
    }

    @PostMapping
    public ResponseEntity<Customer> createCustomerWithOrder(@RequestBody CustomerRequestOrder request) {
        Customer customer = new Customer(request.getCustomerName());
        logger.info("Created a customer with name {}", request.getCustomerName());
        List<Order> orders = request
                .getProductName()
                .stream()
                .map(Order::new)
                .toList();
        Customer savedCustomer = customerService.createCustomerWithOrder(customer, orders);
        logger.info("API is successful");
        return ResponseEntity.ok().body(savedCustomer);
    }

    @Getter
    @Setter
    public static class CustomerRequestOrder {
        private String customerName;
        private List<String> productName;
    }
}

I have added the dependencies below in my pom.xml:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-log4j2</artifactId>
  <version>3.3.3</version>
</dependency>
<dependency>
  <groupId>com.splunk.logging</groupId>
  <artifactId>splunk-library-javalogging</artifactId>
  <version>1.11.8</version>
</dependency>

I am unable to see any logs in Splunk after I hit the API, although I am able to see logs locally:

2024-09-02T19:37:00.629+05:30 INFO 24912 --- [nio-8080-exec-4] c.e.a.controller.CustomerController : Created a customer with name John Doe
2024-09-02T19:37:00.667+05:30 INFO 24912 --- [nio-8080-exec-4] c.e.a.controller.CustomerController : API is successful
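As a sanity check on the HEC side, separate from log4j, the JSON body the event endpoint expects can be built and posted from any client. A hedged sketch: the index and sourcetype values mirror the post, and `hec_event` is a helper name of my own invention:

```python
import json

def hec_event(message, index="customer_api_dev", sourcetype="log4j"):
    """Build the JSON body expected by Splunk's /services/collector/event endpoint.

    Posting this with an `Authorization: Splunk <token>` header (as in the
    curl command above) should make the event searchable in `index`.
    """
    return json.dumps({"event": message, "index": index, "sourcetype": sourcetype})

body = hec_event("Hello, Splunk!")
```

Comparing a payload like this (which works via curl) against what the appender sends can help isolate whether the problem is on the HEC side or the log4j side.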
I am trying to use a lookup to specify the span option value for the bin command, with map:

| inputlookup mylookupup.csv
| fields Index, SearchString, Tdiv
| map [ search index="$Index$" _raw="*$SearchString$*" | bin span="$Tdiv$" _time ]

The request fails with:

Error in 'bin' command: The value for option span (Tdiv) is invalid. When span is expressed using a sub-second unit (ds, cs, ms, us), the span value needs to be < 1 second, and 1 second must be evenly divisible by the span value.

Example values in the Tdiv field: 15m, 1h. Could you help me with this problem?
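One thing worth trying: when `map` substitutes a token that already sits inside double quotes, the quoting can interfere with the substitution. A sketch only, untested against this lookup, where the whole subsearch is passed through `map`'s `search=` argument so `$Tdiv$` is substituted as a bare value:

```
| inputlookup mylookupup.csv
| fields Index, SearchString, Tdiv
| map search="search index=\"$Index$\" _raw=\"*$SearchString$*\" | bin span=$Tdiv$ _time"
```

If the error persists, checking the lookup for stray whitespace or empty Tdiv values would be the next step, since `bin` rejects any span string it cannot parse as a time unit.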
Hi folks, I have an issue where I can't get an event to break right. The event looks like this:

************************************
2024.09.03.141001
************************************
sqlplus -S -L swiftfilter/_REMOVED_@PPP @"long_lock_alert.sql"
TAG COUNT(*)
--------------- ----------
PPP_locks_count 0
TAG COUNT(*)
--------------- ----------
PPP_locks_count 0
SUCCESS
End Time: 2024.09.03.141006

Props looks like this:

[nk_pp_tasks]
SHOULD_LINEMERGE=false
LINE_BREAKER=End Time([^\*]+)
NO_BINARY_CHECK=true
TIME_FORMAT=%Y.%m.%d.%H%M%S
TIME_PREFIX=^.+[\r\n]\s
BREAK_ONLY_BEFORE_DATE = false

Outcome is this: when the logfile is imported through 'Add Data', everything looks fine and the event has not been broken up into 3. Any ideas on how to make Splunk not break up the event?
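For comparison, `LINE_BREAKER` only breaks at its first capture group, so one option is to anchor the break on the run of asterisks that opens each record rather than on the text after `End Time`. A sketch only, which assumes every event begins with a line of asterisks immediately followed by the `%Y.%m.%d.%H%M%S` timestamp line (the closing asterisk line is not followed by a timestamp, so it would not match):

```
[nk_pp_tasks]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\*{10,}[\r\n]+\d{4}\.\d{2}\.\d{2}\.\d{6}
NO_BINARY_CHECK = true
TIME_FORMAT = %Y.%m.%d.%H%M%S
TIME_PREFIX = ^\*+[\r\n]+
```

Only the captured newlines are discarded at the break, so the asterisk line and timestamp stay at the top of the next event, which is why `TIME_PREFIX` here skips past the leading asterisks.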
Hi, I want to extract the purple part, but Severity can be Critical as well.

[Time:29-08@17:52:05.880] [60569130] 17:52:28.604 10.82.10.245 local0.notice [S=2952486] [BID=d57afa:30] RAISE-ALARM:acBoardEthernetLinkAlarm: [KOREASBC1] Ethernet link alarm. LAN port number 3 is down.; Severity:minor; Source:Board#1/EthernetLink#3; Unique ID:206; Additional Info1:GigabitEthernet 4/3; Additional Info2:SEL-SBC01; [Time:29-08@17:52:28.604] [60569131]
17:52:28.605 10.82.10.245 local0.warning [S=2952487] [BID=d57afa:30] RAISE-ALARM:acEthernetGroupAlarm: [KOREASBC1] Ethernet Group alarm. Ethernet Group 2 is Down.; Severity:major; Source:Board#1/EthernetGroup#2; Unique ID:207; Additional Info1:; [Time:29-08@17:52:28.605] [60569132]
17:52:28.721 10.82.10.245 local0.notice [S=2952488] [BID=d57afa:30] SYS_HA: Redundant unit physical network interface error fixed. [Code:0x46000] [Time:29-08@17:52:28.721] [60569133]
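Since the highlighting is lost here, assuming the target is the alarm description together with the Severity value (which can be minor, major, or critical), the pattern can be prototyped with an ordinary regex. The field names `alarm`, `device`, `alarm_text`, and `severity` are my own; in Splunk the same pattern would go into a `rex` command:

```python
import re

# Hypothetical pattern: capture the alarm name, the device tag, the free-text
# alarm description, and the Severity value (case-insensitive).
PATTERN = re.compile(
    r"RAISE-ALARM:(?P<alarm>[^:]+):\s+\[(?P<device>[^\]]+)\]\s+"
    r"(?P<alarm_text>[^;]+);\s+Severity:(?P<severity>\w+)",
    re.IGNORECASE,
)

sample = ("RAISE-ALARM:acBoardEthernetLinkAlarm: [KOREASBC1] Ethernet link alarm. "
          "LAN port number 3 is down.; Severity:minor; Source:Board#1/EthernetLink#3;")
m = PATTERN.search(sample)
```

Because `\w+` matches any severity word, the same pattern covers minor, major, and Critical events alike.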
Hi, I want to extract the purple part.

[Time:29-08@17:53:05.654] [60569222] 17:53:05.654 10.82.10.245 local3.notice [S=2952578] [SID=d57afa:30:1773434] (N 71121559) AcSIPDialog(#28)::TransactionFail - ClientTransaction(#471) failed sending message with CSeq 1 OPTIONS CallID 20478380282982024175249@1.215.255.202, the cause is Transport Error [Time:29-08@17:53:05.654] [60569223]
17:53:05.655 10.82.10.245 local0.warning [S=2952579] [BID=d57afa:30] RAISE-ALARM:acProxyConnectionLost: [KOREASBC1] Proxy Set Alarm Proxy Set 1 (PS_ITSP): Proxy lost. looking for another proxy; Severity:major; Source:Board#1/ProxyConnection#1; Unique ID:208; Additional Info1:; [Time:29-08@17:53:05.655] [60569224]
17:53:05.656 10.82.10.245 local0.warning [S=2952580] [BID=d57afa:30] RAISE-ALARM:acIpGroupNoRouteAlarm: [KOREASBC1] IP Group is temporarily blocked. IP Group (IPG_ITSP) Blocked Reason: No Working Proxy; Severity:major; Source:Board#1/IPGroup#2; Unique ID:209; Additional Info1:; [Time:29-08@17:53:05.656] [60569225]
17:53:05.657 10.82.10.245 local3.notice [S=2952581] [SID=d57afa:30:1773434] (N 71121560) AcSIPDialog(#28): Handling DIALOG_DISCONNECT_REQ in state DialogInitiated
Hi guys, I need to delete some records from the deployment server, but when I do it via Forwarder Management I get the "This functionality has been deprecated" alert. Is there any other way I can proceed?
Hello,

oneshotsearch_results = service.jobs.oneshot(searchquery_oneshot, **kwargs_oneshot)
reader = results.JSONResultsReader(oneshotsearch_results)
for item in reader:
    if isinstance(item, dict):
        for key in item:
            if key == '<...>':
                A = str(item[key])
                print('A is :', A)

The above code was working until yesterday. Now it does not enter the first for loop (i.e. for item in reader) anymore. I verified this by adding a print statement before the first if statement, and it is not printing.
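For debugging, it may help to know that `JSONResultsReader` yields both `dict` results and `results.Message` diagnostic objects; when no dicts arrive, inspecting the messages usually reveals why (for example a server-side search error). A hedged sketch of the loop factored into a testable helper; `extract_field` is my own name, not an SDK function:

```python
def extract_field(reader, field):
    """Collect string values of `field` from dict results, skipping
    non-dict items such as splunklib's diagnostic Message objects."""
    values = []
    for item in reader:
        if isinstance(item, dict) and field in item:
            values.append(str(item[field]))
        # In the real SDK loop a non-dict item would be a results.Message;
        # printing those is the quickest way to surface server-side errors.
    return values
```

Called as `extract_field(results.JSONResultsReader(oneshotsearch_results), '<...>')`, an empty return with no messages printed would point at the search itself returning no events for the time window.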
Hello, I want to extract the purple highlighted part.

[Time:29-08@17:53:03.562] [60569219] 17:53:03.562 10.82.10.245 local3.notice [S=2952575] [SID=d57afa:30:1773441] (N 71121555) #98)gwSession[Deallocated] [Time:29-08@17:53:03.562] [60569220]
17:53:05.158 10.82.10.245 local3.notice [S=2952576] [SID=d57afa:30:1773434] (N 71121556) RtxMngr::Transmit 1 OPTIONS Rtx Left: 0 Dest: 211.237.70.18:5060, TU: AcSIPDialog(#28)(N 71121557) SIPTransaction(#471)::SendMsgBuffer - Resending last message [Time:29-08@17:53:05.158] [60569221]
17:53:05.654 10.82.10.245 local3.notice [S=2952577] [SID=d57afa:30:1773434] (N 71121558) RtxMngr::Dispatch - Retransmission of message 1 OPTIONS was ended. Terminating transaction... [Time:29-08@17:53:05.654] [60569222]
17:53:05.654 10.82.10.245 local3.notice [S=2952578] [SID=d57afa:30:1773434] (N 71121559) AcSIPDialog(#28)::TransactionFail - ClientTransaction(#471) failed sending message with CSeq 1 OPTIONS CallID 20478380282982024175249@1.215.255.202, the cause is Transport Error [Time:29-08@17:53:05.654] [60569223]
17:53:05.655 10.82.10.245 local0.warning [S=2952579] [BID=d57afa:30] RAISE-ALARM:acProxyConnectionLost: [KOREASBC1] Proxy Set Alarm Proxy Set 1 (PS_ITSP): Proxy lost. looking for another proxy; Severity:major; Source:Board#1/ProxyConnection#1; Unique ID:208; Additional Info1:; [Time:29-08@17:53:05.655] [60569224]
17:53:05.656 10.82.10.245 local0.warning [S=2952580] [BID=d57afa:30] RAISE-ALARM:acIpGroupNoRouteAlarm: [KOREASBC1] IP Group is temporarily blocked. IP Group (IPG_ITSP) Blocked Reason: No Working Proxy; Severity:major; Source:Board#1/IPGroup#2; Unique ID:209; Additional Info1:; [Time:29-08@17:53:05.656] [60569225]
17:53:05.657 10.82.10.245
Hello, instance principal authentication is not working in the OC19 realm. Is there any plan to support OC19? The debug log contains:

2024-09-03 08:16:14,077 DEBUG http://x.x.x.x:80 "GET /opc/v2/identity/intermediate.pem HTTP/1.1" 200 None
2024-09-03 08:16:14,413 DEBUG Starting new HTTP connection (1): x.x.x.x:80
2024-09-03 08:16:14,416 DEBUG http://x.x.x.x:80 "GET /opc/v2/instance/region HTTP/1.1" 200 14
2024-09-03 08:16:14,416 DEBUG Unknown regionId 'eu-frankfurt-2', will assume it's in Realm OC1
2024-09-03 08:16:14,636 DEBUG http://x.x.x.x:80 "GET /opc/v2/identity/cert.pem HTTP/1.1" 200 None
2024-09-03 08:16:14,646 DEBUG http://x.x.x.x:80 "GET /opc/v2/identity/key.pem HTTP/1.1" 200 1675
2024-09-03 08:16:14,692 DEBUG http://x.x.x.x:80 "GET /opc/v2/identity/intermediate.pem HTTP/1.1" 200 None
2024-09-03 08:16:14,695 DEBUG Starting new HTTPS connection (1): auth.eu-frankfurt-2.oraclecloud.com:443

Thank you!
NagyG
I have events from a Trellix HX appliance and I need to adjust the _time of the log events, because the date comes in as 9/3/20 while we are on 9/3/2024. How can this be changed? Thanks
Hello, I am currently working in a SOC, and I want to test rules in Splunk ES using the BOTSv2 dataset. How can I configure all the rules for it?
Hi Community, I ran into trouble when activating the use case "User Login to Unauthorized Geo": it fails with an error saying I don't have the "sse_host_to_country" and "gdpr_user_category" lookups. I am using ES Content Updates v4.0.0, and my lab has ES Content Updates v4.38.0, but when I check, neither has any sse_host_to_country or gdpr_user_category lookup files. I have already searched Google and found no answer. Maybe this community has enough experience with this. Thanks
There is no default solution in Splunk for managing the frozen bucket path. I wrote a script where you provide a config file specifying a volume or time limit for the logs in the frozen path of each index. If the policy is violated, the oldest log is deleted. The script also provides detailed logs of the deletion process, including how much data and time remain in the frozen path for each index and how long the deletion took. The entire script runs as a service and executes once every 24 hours. I've explained the implementation details and all necessary information in the link below.

Mohammad-Mirasadollahi/Splunk-Frozen-Retention-Policy: This repository provides a set of Bash scripts designed to manage frozen data in Splunk environments. (github.com)

FrozenFreeUp
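The deletion policy described above can be sketched in a few lines: given an index's frozen buckets with their sizes and ages, remove the oldest buckets until the total size fits under the configured volume limit. This is a simplified illustration of the idea, not the repository's actual code, which also supports a time-based limit:

```python
def buckets_to_delete(buckets, max_bytes):
    """Return names of the oldest buckets to remove so the total size
    fits under max_bytes.

    `buckets` is a list of (name, size_bytes, mtime) tuples; the real
    script would gather these by scanning the frozen path on disk.
    """
    total = sum(size for _, size, _ in buckets)
    doomed = []
    for name, size, _ in sorted(buckets, key=lambda b: b[2]):  # oldest first
        if total <= max_bytes:
            break
        doomed.append(name)
        total -= size
    return doomed
```

Deleting oldest-first preserves the most recent frozen data, which matches the retention semantics Splunk itself applies when rolling buckets to frozen.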
Hi, we have a requirement to join the two searches below with an eval statement; could someone assist with a solution, please?

search 1 = index="wpg" host=*pz-pay* OrderSummary | stats count AS "Total"
search 2 = index="wpg" host=*pz-pay* OrderSummary AND "Address is invalid, it might contain a card number" | stats count AS "Failure"
result = (search 1 / search 2) * 100

Thanks, Tom
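For what it's worth, a ratio like this usually doesn't need two separate searches or a join; a single `stats` over the broader search can compute both counts at once. A sketch, assuming the failure events are those whose raw text contains the quoted message, and mirroring the (search 1 / search 2) ratio from the post:

```
index="wpg" host=*pz-pay* OrderSummary
| stats count AS Total,
        sum(eval(if(searchmatch("Address is invalid, it might contain a card number"), 1, 0))) AS Failure
| eval result = round((Total / Failure) * 100, 2)
```

Running it as one search also guarantees both counts come from the same time window.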
Hi all, I looked around for a syntax definition for SPL in Notepad++ and didn't find one. Attached is my attempt; feel free to use it. If you have any suggestions or changes, post a reply. Thanks, everyone.
I found a similar post that did not quite fit the bill of what I am trying to do. I want to create a link graph that shows a logical flow of all of our data, from index > sourcetype > fields. Issues I am running into:

- | fieldsummary does not work with metadata, and thus does not include the index or sourcetype.
- | tstats is only able to show the index and sourcetype.

I figure there is a base search I need to set up to pull the initial sourcetypes to run fieldsummary on, but I'm not sure how to string these techniques together, or whether something like this is even feasible without placing a very heavy burden on the cluster. I would like to make this a report that updates a lookup weekly, so that the dashboard references the lookup instead of running the search. Thanks in advance for your time!
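One way the pieces described above could be chained, hedged heavily since running a subsearch per sourcetype is exactly the kind of load being worried about: use `tstats` to enumerate index/sourcetype pairs, `map` to run `fieldsummary` for each pair, and write the combined result to a lookup that the dashboard reads. The lookup name `data_flow_map.csv` and the `maxsearches` value are illustrative, not from the post:

```
| tstats count WHERE index=* BY index, sourcetype
| map maxsearches=100 search="search index=\"$index$\" sourcetype=\"$sourcetype$\"
    | fieldsummary
    | eval index=\"$index$\", sourcetype=\"$sourcetype$\"
    | fields index, sourcetype, field"
| outputlookup data_flow_map.csv
```

Scheduled weekly as a report, this keeps the expensive enumeration out of the dashboard, which would only need `| inputlookup data_flow_map.csv`.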
Hi team, we are using the add-on below to collect Azure metrics through the REST API. Data is being ingested into Splunk Cloud; however, we are seeing a lag of exactly 4 hours. Splunk Cloud is in the UTC time zone, and we have set TZ=UTC in the HF's apps/local/props.conf since the application writes in UTC time, yet the lag in Splunk Cloud remains.

https://splunkbase.splunk.com/app/3110

Any help is highly appreciated.
Other than poor speed and performance, is there a reason why the map command is considered dangerous? The official documentation says that the map command can result in data loss or potential security risks, but I don't see any details. Why?

https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Map