All Posts

Hello everyone, I found the solution with my team: in addition to changing outputs.conf and inserting the appropriate sourcetype, when the header was still not removed we followed this procedure: we changed the rsyslog template definition on all UFs, removing %TIMESTAMP% %HOSTNAME% (the fields that appear in the header) from the configuration. Bye, G.
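(For reference, a minimal sketch of what such an rsyslog template change could look like; the template name, file paths, and action are made up for illustration - only the removal of %TIMESTAMP% %HOSTNAME% reflects the fix described above:)

# hypothetical /etc/rsyslog.d/forward.conf on the UF host
# before: string="%TIMESTAMP% %HOSTNAME% %syslogtag%%msg%\n"
template(name="NoHeaderFormat" type="string" string="%syslogtag%%msg%\n")
action(type="omfile" file="/var/log/app/forwarded.log" template="NoHeaderFormat")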
Yes, but as I understand it, that's not the issue. If you copy the same contents several times over into a single file and upload it to Splunk via the "Add Data" dialog with the settings @jesperbassoe provided, it does get properly split into separate events. True, the final timestamp gets discarded because it is treated as the linebreaker, but apart from that the stream is properly broken into events. The screenshot, however, shows the event butchered into separate parts, which doesn't really match the LINE_BREAKER definition. So the questions are: 1) Where are the settings defined (on which components, and are there any other conflicting and possibly overriding settings)? 2) How is the file ingested (most probably by a monitor input on a UF)?
The existing props are discarding the End Time value because of the LINE_BREAKER setting. LINE_BREAKER always throws out the text that matches the first capture group. Try these settings:

[nk_pp_tasks]
SHOULD_LINEMERGE=false
LINE_BREAKER=End Time:[^\*]+?()
NO_BINARY_CHECK=true
TIME_FORMAT=%Y.%m.%d.%H%M%S
TIME_PREFIX=\*\*+
BREAK_ONLY_BEFORE_DATE = false
There is no such thing as a "summary index" as a separate type of index. Anyway, are you sure the user you're running your search as can use the collect command?
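(One way to check this is a REST search like the sketch below; the role name is a placeholder, and the exact capability name - run_collect in recent Splunk versions - should be verified against your version's documentation:)

| rest /services/authorization/roles splunk_server=local
| search title="<your_role>"
| table title capabilities imported_capabilities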
OK. This is a search from a particular accelerated datamodel, so for this to work three things must be configured properly:
1) You must be getting proper logs from the firewall.
2) You must have the datamodel configured properly (I suppose you either have to ingest firewall data into a specific index or have to reconfigure the datamodel to cover the index you're ingesting your fw events into).
3) And finally, you must have datamodel acceleration enabled for that datamodel.
So these are three things that must happen before that dashboard can be populated with results. BTW, you pointed to a SOAR app as a relevant product for this thread. I suppose you meant the Fortinet FortiGate App - https://splunkbase.splunk.com/app/2800 - it does have a description section which seems to explain how to configure it (but I'd be cautious about the instructions for both this app and the accompanying add-on, because it's a third-party add-on, vendors don't always know Splunk well, and some of their ideas can be far from best practice).
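(As a quick sanity check - a sketch only; the datamodel name ftnt_fos is taken from the dashboard search in this thread - you could compare unaccelerated and accelerated results. Count of events the datamodel matches at all:

| tstats count FROM datamodel=ftnt_fos

Count coming from acceleration summaries only; zero here but non-zero above points at missing or incomplete acceleration:

| tstats summariesonly=true count FROM datamodel=ftnt_fos )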
Summary indexes are no different from other indexes, so the code you use to access one should work for the other. How does the existing code fail? What error messages do you see? Have you checked search.log? It's possible the query is being caught by the "risky code" trap because the collect command is considered a risky one. To avoid that, add the following lines to a commands.conf file (not system/default):

[collect]
is_risky = false
Hi PickleRick,
So the Forti app has an event dashboard to view the CPU and Memory, but when you open the search you get no results:

|tstats summariesonly=true last(log.system_event.system.cpu) AS cpus FROM datamodel=ftnt_fos WHERE nodename="log.system_event.system" log.devname="*" log.vendor_action=perf-stats groupby _time log.devname
| timechart values(cpus) by log.devname

New to Splunk so just wondering if there is something here I need to mod...

Cheers
1. You don't "monitor the CPU" with Splunk as in "use a search to interactively connect to the device and check its parameters". You can only search the data that has been ingested prior to the search. So...
2. Do you have any data from your firewall ingested? Do you know where it is? Can you search it at all? Do you know _what_ data is ingested from the firewall?
3. What does "doesn't seem to work" mean? What are you doing (especially - what search are you running) and what are the results?
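(A rough way to look for any FortiGate data at all - a sketch only; the sourcetype pattern is an assumption, since the add-on typically uses sourcetypes starting with fortigate, so check what your inputs actually set:)

| tstats count WHERE index=* sourcetype=fortigate* BY index sourcetype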
Hi Guys,
Has anyone done a search where you can monitor the CPU on the Fortinet Firewalls? It's in the App but doesn't seem to work.
Cheers
Ahmed
Hi @PaulPanther
This is the screenshot of Job --> Inspect Job. Please, I need help on this ASAP.
I think I've seen this somewhere. For some reason map sometimes behaves differently if the search is specified in square brackets and differently if it's passed as a parameter to the search= option. Try the latter form (remembering about proper escaping):

| inputlookup mylookupup.csv
| fields Index, SearchString, Tdiv
| map search="search index=\"$Index$\" _raw=\"*$SearchString$*\" | bin span=\"$Tdiv$\" _time"
Hello,
Can the Splunk Python SDK be used along with a summary index? How? I wish to schedule periodic querying and extraction of the data from Splunk, for which I usually use the SDK like this, and it works for a one-time run since I removed my "collect index ..." code from the query:

import splunklib.client as client

service = client.connect(
    host=HOST,
    port=PORT,
    username=USERNAME,
    password=PASSWORD)

kwargs_oneshot = {"earliest_time": "-1h",
                  "latest_time": "now",
                  "output_mode": "json",
                  "count": 100}

searchquery_oneshot = "search <query>"

# if I want collected index results to be used below periodically, i.e. every 1 hour, what change do I make in my code?
oneshotsearch_results = service.jobs.oneshot(searchquery_oneshot, **kwargs_oneshot)

Thanks
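(As noted in the reply above, a summary index is queried like any other index, so a sketch of the periodic variant could look like the following; the index name my_summary and the simple sleep loop are assumptions for illustration, not part of the original code, and the connection variables are reused from the question:)

import time
import splunklib.client as client

service = client.connect(host=HOST, port=PORT, username=USERNAME, password=PASSWORD)

while True:
    # read what the scheduled "... | collect index=my_summary" search has written
    results = service.jobs.oneshot(
        "search index=my_summary",
        earliest_time="-1h",
        latest_time="now",
        output_mode="json",
        count=0)
    # ... process results here ...
    time.sleep(3600)  # repeat every hour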
I am unable to see any logs in Splunk from my Spring Boot application. I am adding my XML property file, controller file, dependency file and Splunk data input screenshots to help resolve the issue. I have been breaking my head for the past couple of days and am unable to find what I am missing.

[Screenshot: HEC data input config UI]
[Screenshot: HEC data input edit UI]
[Screenshot: Global Settings]

The following API call is logging events in my index:

curl -k http://localhost:8088/services/collector/event -H "Authorization: Splunk ***" -d '{"event": "Hello, Splunk!"}'

This is my log4j2-spring.xml:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
    <Appenders>
        <Console name="console" target="SYSTEM_OUT">
            <PatternLayout pattern="%style{%d{ISO8601}} %highlight{%-5level }[%style{%t}{bright,blue}] %style{%C{10}}{bright,yellow}: %msg%n%throwable" />
        </Console>
        <SplunkHttp name="splunkhttp"
                    url="http://localhost:8088"
                    token="***"
                    host="localhost"
                    index="customer_api_dev"
                    type="raw"
                    source="http-event-logs"
                    sourcetype="log4j"
                    messageFormat="text"
                    disableCertificateValidation="true">
            <PatternLayout pattern="%m" />
        </SplunkHttp>
    </Appenders>
    <Loggers>
        <!-- LOG everything at DEBUG level -->
        <Root level="debug">
            <AppenderRef ref="console" />
            <AppenderRef ref="splunkhttp" />
        </Root>
    </Loggers>
</Configuration>

This is my controller:

package com.example.advanceddbconcepts.controller;

import com.example.advanceddbconcepts.entity.Customer;
import com.example.advanceddbconcepts.entity.Order;
import com.example.advanceddbconcepts.service.CustomerService;
import lombok.Getter;
import lombok.Setter;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

import java.util.List;

@RestController
@RequestMapping("/api/customers")
public class CustomerController {

    Logger logger = LogManager.getLogger(CustomerController.class);

    private final CustomerService customerService;

    public CustomerController(CustomerService customerService) {
        this.customerService = customerService;
    }

    @PostMapping
    public ResponseEntity<Customer> createCustomerWithOrder(@RequestBody CustomerRequestOrder request) {
        Customer customer = new Customer(request.getCustomerName());
        logger.info("Created a customer with name {}", request.getCustomerName());
        List<Order> orders = request
                .getProductName()
                .stream()
                .map(Order::new)
                .toList();
        Customer savedCustomer = customerService.createCustomerWithOrder(customer, orders);
        logger.info("API is successful");
        return ResponseEntity.ok().body(savedCustomer);
    }

    @Getter
    @Setter
    public static class CustomerRequestOrder {
        private String customerName;
        private List<String> productName;
    }
}

I have added the below dependencies in my pom.xml:

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-log4j2</artifactId>
        <version>3.3.3</version>
    </dependency>
    <dependency>
        <groupId>com.splunk.logging</groupId>
        <artifactId>splunk-library-javalogging</artifactId>
        <version>1.11.8</version>
    </dependency>
</dependencies>

I am unable to see any logs in Splunk after I hit the API. I am able to see logs in my local console:

2024-09-02T19:37:00.629+05:30 INFO 24912 --- [nio-8080-exec-4] c.e.a.controller.CustomerController : Created a customer with name John Doe
2024-09-02T19:37:00.667+05:30 INFO 24912 --- [nio-8080-exec-4] c.e.a.controller.CustomerController : API is successful
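(One debugging step worth noting, since the appender above is configured with type="raw": the curl test that works uses the event endpoint, so it may help to confirm that the raw endpoint accepts data with the same token. The command below simply mirrors the working curl test, with the token redacted as in the original post:)

curl -k http://localhost:8088/services/collector/raw -H "Authorization: Splunk ***" -d 'Hello, Splunk raw endpoint!'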
I am trying to use a lookup to specify the span option value in the bin command with map:

| inputlookup mylookupup.csv
| fields Index, SearchString, Tdiv
| map [ search index="$Index$" _raw="*$SearchString$*" | bin span="$Tdiv$" _time]

The previous request fails with:

Error in 'bin' command: The value for option span (Tdiv) is invalid. When span is expressed using a sub-second unit (ds, cs, ms, us), the span value needs to be < 1 second, and 1 second must be evenly divisible by the span value.

Examples of values in the Tdiv field: 15m, 1h

Could you help me with this problem?
I suppose you had to authorize that website in dashboard conf files, am I right?
Hi folks.. I have an issue where I can't get an event to break right. The event looks like this:

************************************
2024.09.03.141001
************************************
sqlplus -S -L swiftfilter/_REMOVED_@PPP @"long_lock_alert.sql"

TAG             COUNT(*)
--------------- ----------
PPP_locks_count          0

TAG             COUNT(*)
--------------- ----------
PPP_locks_count          0

SUCCESS
End Time: 2024.09.03.141006

Props looks like this:

[nk_pp_tasks]
SHOULD_LINEMERGE=false
LINE_BREAKER=End Time([^\*]+)
NO_BINARY_CHECK=true
TIME_FORMAT=%Y.%m.%d.%H%M%S
TIME_PREFIX=^.+[\r\n]\s
BREAK_ONLY_BEFORE_DATE = false

Outcome is this: when the logfile is imported through 'Add Data' everything looks fine and the event has not been broken up in 3. Any ideas on how to make Splunk not break up the event?
Hi @MCW

1. How many events are returned by your <SPL search>?
2. Can you share the output of your <SPL search> that you used (e.g. as CSV)? I'd like to replicate your situation on my server.
3. Do you have access to the server where Splunk is running? If yes, can you provide the output of the following two commands?

./splunk show config mlspl | grep max_inputs
./splunk btool mlspl list --debug | grep max_inputs

Without knowing any more details, my guess is that your <SPL search> returned more events than you allow in your max_inputs setting (e.g. your search returns 200'000 events and your max_inputs=100'000). Consequently, the events are downsampled by DSDL/MLTK. The resulting my_test_data.csv with 1153 lines that you see within the Jupyter notebook environment is exactly this sample.

Regards, Gabriel
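(If the limit does turn out to be the cause, a sketch of how it is typically raised; the stanza below assumes you want to change the default for all algorithms and that a local mlspl.conf in the MLTK app directory is the right place in your deployment, so verify before applying:)

# e.g. $SPLUNK_HOME/etc/apps/Splunk_ML_Toolkit/local/mlspl.conf
[default]
max_inputs = 500000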
So everything is the same except the metrics are different, the data is different, and generally we don't know what "doesn't work" and why, right? But seriously, the data is important here, as well as what your transform looks like. Look at the Masa diagrams: https://community.splunk.com/t5/Getting-Data-In/Diagrams-of-how-indexing-works-in-the-Splunk-platform-the-Masa/m-p/590774 I haven't worked with metrics much, but I'd say the metric schema is applied after transforms, so you need to filter your data by raw event contents.
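(For illustration, a minimal sketch of filtering by raw contents at index time; the sourcetype name and regex below are placeholders, not taken from this thread:)

props.conf:
[my_metrics_sourcetype]
TRANSFORMS-drop_unwanted = drop_unwanted_metrics

transforms.conf:
[drop_unwanted_metrics]
REGEX = pattern_that_matches_events_to_drop
DEST_KEY = queue
FORMAT = nullQueue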
https://regex101.com/r/Op8H3R/1
| rex max_match=0 "(?<alarm>RAISE-ALARM[^;]+;)"

regex101.com is a good place to try out and learn regular expressions: https://regex101.com/r/F3vySr/1
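(A self-contained way to try it; the sample event below is made up for illustration, and only the rex line comes from the answer above:)

| makeresults
| eval _raw="2024-09-03 12:00:01 RAISE-ALARM CPU high on node1; other text RAISE-ALARM disk full on node2;"
| rex max_match=0 "(?<alarm>RAISE-ALARM[^;]+;)"
| table alarm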