All Posts


I read through the Splunk docs and it seems like a UF with customized inputs.conf and outputs.conf files should work. If the two enterprise servers are defined in the outputs.conf file, then we can use the inputs.conf stanza to customize the destination where various log files are sent. Just wanted to confirm before accepting this solution.

_TCP_ROUTING = <comma-separated list>
* A comma-separated list of tcpout group names.
* This setting lets you selectively forward data to one or more specific indexers.
* Specify the tcpout group that the forwarder uses when forwarding the data. The tcpout group names are defined in outputs.conf with [tcpout:<tcpout_group_name>].
* To forward data to all tcpout group names that have been defined in outputs.conf, set to '*' (asterisk).
* To forward data from the "_internal" index, you must explicitly set '_TCP_ROUTING' to either "*" or a specific splunktcp target group.
* Default: The groups specified in 'defaultGroup' in [tcpout] stanza in the outputs.conf file
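For reference, a minimal sketch of how the two files might fit together on the UF; the group names, server addresses, and monitor paths below are hypothetical placeholders, not from the original post:

outputs.conf:

[tcpout]
defaultGroup = group_a

[tcpout:group_a]
server = splunk-ent-1.example.com:9997

[tcpout:group_b]
server = splunk-ent-2.example.com:9997

inputs.conf:

[monitor:///var/log/app_a.log]
_TCP_ROUTING = group_a

[monitor:///var/log/app_b.log]
_TCP_ROUTING = group_b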
Hi @James.Gardner, I found this Docs page that shows Supported Environments: https://docs.appdynamics.com/appd/24.x/24.8/en/application-monitoring/app-server-agents-supported-environments
My company had a Splunk 8.0 server that hadn't been upgraded in years. There was a lot of abandoned testing on it over the years, so cleanup and multiple upgrades to get to 9.2.1 were going to be a big undertaking. I decided to stand up a new server with 9.2.1 and migrate over the data. We went live on it a few weeks ago and have had no issues with ingesting data, searches, or alerts.

However, the Indexes page under Settings shows 0 on all indexes for Current Size and Event Count, and Earliest Event and Latest Event are all blank. This is happening on all the indexes, both internal and non-internal. We noticed this before go-live and talked to support. They said it was because of the trial license we were using and would go away when we put our real license on it during go-live. We did the license switch during go-live, but we're still seeing 0 for everything. We can search these indexes, so there is data in them. I don't see any errors in the logs when we go to the Indexes page.

If I go to Indexes and Volumes: Instances in the Monitoring Console under Snapshots, it shows my bucket count and space used on the file system, but index usage is 0 for everything. Under Historical it does show the index sizes.
You can use this search to look for any lookup edits that were logged to the _internal index:

index=_internal "Lookup edited*" sourcetype=lookup_editor_rest_handler
| table _time namespace lookup_file user

It will output the time it was saved, the app/namespace it was in, the filename, and the user that saved it.
In the meantime, Splunk support confirmed the issue and an Escalation Manager is involved. Hope we get a fixed version soon, but currently we have no statement on this. You may also want to open a case and refer to #3518811.
Wow. My problem was that this snippet works ONLY when I put "T" in the time format:

| eval _time=strptime(time2, "%Y-%m-%dT%H:%M:%S.%3N")
Hello everyone, I found the solution with my team. In addition to changing outputs.conf by inserting the appropriate sourcetype, when the header was still not removed we followed this procedure: we changed the template definition in the rsyslog configuration file on all UFs, removing %TIMESTAMP% %HOSTNAME% (the part that appears in the header) from the configuration. Bye, G.
Yes, but as I understand it, that's not the issue. If you copy the same contents several times over into a single file and upload it to Splunk via the "add data" dialog with the settings @jesperbassoe provided, it does get properly split into separate events. True, the final timestamp is discarded as it is treated as a linebreaker, but apart from that the stream is properly broken into events. The screenshot, however, shows the event butchered into separate parts, which doesn't really match the LINE_BREAKER definition. So the questions are:
1) Where are the settings defined (on which components; and are there any other conflicting and possibly overriding settings)?
2) How is the file ingested (most probably by a monitor input on a UF)?
The existing props are discarding the End Time value because of the LINE_BREAKER setting. LINE_BREAKER always throws out the text that matches the first capture group. Try these settings:

[nk_pp_tasks]
SHOULD_LINEMERGE=false
LINE_BREAKER=End Time:[^\*]+?()
NO_BINARY_CHECK=true
TIME_FORMAT=%Y.%m.%d.%H%M%S
TIME_PREFIX=\*\*+
BREAK_ONLY_BEFORE_DATE = false
There is no such thing as a "summary index" as a separate type of index. Anyway, are you sure the user you're running your search with can use the collect command?
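One way to check, as a minimal sketch, is to list which roles carry the collect-related capability; the capability name run_collect is an assumption here (verify against authorize.conf in your version):

| rest /services/authorization/roles splunk_server=local
| fields title capabilities imported_capabilities
| search capabilities="run_collect" OR imported_capabilities="run_collect"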
OK. This is a search against a particular accelerated datamodel, so for it to work three things must be configured properly:
1) You must be getting proper logs from the firewall.
2) You must have the datamodel configured properly (I suppose you either have to ingest firewall data into a specific index or have to reconfigure the datamodel to cover the index you're ingesting your fw events into).
3) And finally, you must have datamodel acceleration enabled for that datamodel (a quick check is sketched below).
These are three things that must happen before that dashboard can be populated with results.

BTW, you pointed to a SOAR app as the relevant product for this thread. I suppose you meant the Fortinet FortiGate App - https://splunkbase.splunk.com/app/2800 - it does have a description section which seems to tell you how to configure it (but I'd be cautious about the instructions for both this app and the accompanying add-on, because they are third-party and vendors don't always know Splunk well; some of their ideas can be far from best practice).
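For the third point, a minimal sketch of a tstats check against the accelerated model, assuming the ftnt_fos datamodel name used by the dashboard search; no results (or zero counts) would suggest the acceleration summary is empty or disabled:

| tstats summariesonly=true count FROM datamodel=ftnt_fos BY index, sourcetype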
Summary indexes are no different from other indexes, so the code you use to access one should work for the other. How does the existing code fail? What error messages do you see? Have you checked search.log?

It's possible the query is being caught by the "risky command" safeguard, because the collect command is considered a risky one. To avoid that, add the following lines to a commands.conf file (not system/default):

[collect]
is_risky = false
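For context, a minimal sketch of the usual write/read pattern with a summary index; the index name my_summary and the base search are hypothetical placeholders:

Populate the summary index from a scheduled search:

index=web sourcetype=access_combined | stats count BY status | collect index=my_summary

Then read it back like any other index:

index=my_summary earliest=-24h | stats sum(count) BY status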
Hi Picklerick,

So the Forti app has an event dashboard to view the CPU and Memory, but when you open the search you get no results:

|tstats summariesonly=true last(log.system_event.system.cpu) AS cpus FROM datamodel=ftnt_fos WHERE nodename="log.system_event.system" log.devname="*" log.vendor_action=perf-stats groupby _time log.devname
| timechart values(cpus) by log.devname

New to Splunk, so just wondering if there is something here I need to mod...

Cheers
1. You don't "monitor the CPU" with Splunk in the sense of "use search to interactively connect to the device and check its parameters". You can only search the data that has been ingested prior to the search. So...
2. Do you have any data from your firewall ingested? Do you know where it is? Can you search it at all (a quick check is sketched below)? Do you know _what_ data is ingested from the firewall?
3. What does "doesn't seem to work" mean? What are you doing (especially, what search are you running) and what are the results?
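As a starting point for point 2, a minimal sketch of a raw-data check; the fgt_* sourcetype prefix is only an assumption based on common Fortinet add-on naming, so adjust it to whatever your inputs actually use:

index=* sourcetype=fgt_* earliest=-24h
| stats count BY index, sourcetype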
Hi guys,

Has anyone done a search where you can monitor the CPU on the Fortinet firewalls? It's on the app but doesn't seem to work?

Cheers, Ahmed
Hi @PaulPanther, this is the screenshot of Job --> Inspect Job. I need help on this ASAP, please.
I think I've seen this somewhere. For some reason, map sometimes behaves differently when the search is specified in square brackets than when it's passed as a parameter to the search= option. Try the latter form (remembering the proper escaping):

| inputlookup mylookupup.csv
| fields Index, SearchString, Tdiv
| map search="search index=\"$Index$\" _raw=\"*$SearchString$*\" | bin span=\"$Tdiv$\" _time"
Hello,

Can the Splunk Python SDK be used along with a summary index? How? I wish to schedule periodic querying and extraction of data from Splunk, for which I usually use the SDK like this; it works for a one-time run since I removed my "collect index ..." code from my query:

service = client.connect(
    host=HOST,
    port=PORT,
    username=USERNAME,
    password=PASSWORD)

kwargs_oneshot = {"earliest_time": "-1h",
                  "latest_time": "now",
                  "output_mode": 'json',
                  "count": 100}

searchquery_oneshot = "search <query>"

# if I want collected index results to be used below periodically, i.e. every 1 hour, what change do I make in my code?
oneshotsearch_results = service.jobs.oneshot(searchquery_oneshot, **kwargs_oneshot)

Thanks
I am unable to see any logs in Splunk from my Spring Boot application. I am adding my XML property file, controller file, dependency file, and Splunk data input screenshots to help resolve the issue. I have been breaking my head for the past couple of days and am unable to find what I am missing.

[Screenshots: HEC data input config UI, HEC data input edit UI, Global Settings]

The following API call is logging events in my index:

curl -k http://localhost:8088/services/collector/event -H "Authorization: Splunk ***" -d '{"event": "Hello, Splunk!"}'

This is my log4j2-spring.xml:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
    <Appenders>
        <Console name="console" target="SYSTEM_OUT">
            <PatternLayout pattern="%style{%d{ISO8601}} %highlight{%-5level }[%style{%t}{bright,blue}] %style{%C{10}}{bright,yellow}: %msg%n%throwable" />
        </Console>
        <SplunkHttp name="splunkhttp"
                    url="http://localhost:8088"
                    token="***"
                    host="localhost"
                    index="customer_api_dev"
                    type="raw"
                    source="http-event-logs"
                    sourcetype="log4j"
                    messageFormat="text"
                    disableCertificateValidation="true">
            <PatternLayout pattern="%m" />
        </SplunkHttp>
    </Appenders>
    <Loggers>
        <!-- LOG everything at DEBUG level -->
        <Root level="debug">
            <AppenderRef ref="console" />
            <AppenderRef ref="splunkhttp" />
        </Root>
    </Loggers>
</Configuration>

This is my controller:

package com.example.advanceddbconcepts.controller;

import com.example.advanceddbconcepts.entity.Customer;
import com.example.advanceddbconcepts.entity.Order;
import com.example.advanceddbconcepts.service.CustomerService;
import lombok.Getter;
import lombok.Setter;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;
import java.util.List;

@RestController
@RequestMapping("/api/customers")
public class CustomerController {

    Logger logger = LogManager.getLogger(CustomerController.class);

    private final CustomerService customerService;

    public CustomerController(CustomerService customerService) {
        this.customerService = customerService;
    }

    @PostMapping
    public ResponseEntity<Customer> createCustomerWithOrder(@RequestBody CustomerRequestOrder request) {
        Customer customer = new Customer(request.getCustomerName());
        logger.info("Created a customer with name {}", request.getCustomerName());
        List<Order> orders = request
                .getProductName()
                .stream()
                .map(Order::new)
                .toList();
        Customer savedCustomer = customerService.createCustomerWithOrder(customer, orders);
        logger.info("API is successful");
        return ResponseEntity.ok().body(savedCustomer);
    }

    @Getter
    @Setter
    public static class CustomerRequestOrder {
        private String customerName;
        private List<String> productName;
    }
}

I have added the below dependencies in my pom.xml:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-log4j2</artifactId>
    <version>3.3.3</version>
</dependency>
<dependency>
    <groupId>com.splunk.logging</groupId>
    <artifactId>splunk-library-javalogging</artifactId>
    <version>1.11.8</version>
</dependency>
</dependencies>

I am unable to see any logs in Splunk after I hit the API. I am able to see logs in my local console:

2024-09-02T19:37:00.629+05:30 INFO 24912 --- [nio-8080-exec-4] c.e.a.controller.CustomerController : Created a customer with name John Doe
2024-09-02T19:37:00.667+05:30 INFO 24912 --- [nio-8080-exec-4] c.e.a.controller.CustomerController : API is successful
I am trying to use a lookup to specify the span option value in the bin command with map:

| inputlookup mylookupup.csv
| fields Index, SearchString, Tdiv
| map [ search index="$Index$" _raw="*$SearchString$*" | bin span="$Tdiv$" _time]

The previous request fails with:

Error in 'bin' command: The value for option span (Tdiv) is invalid. When span is expressed using a sub-second unit (ds, cs, ms, us), the span value needs to be < 1 second, and 1 second must be evenly divisible by the span value.

Example values in the Tdiv field: 15m, 1h

Could you help me with this problem?