All Posts

This is a thread from so long ago and is about a long-forgotten version. Nowadays collect is much more flexible, especially if you're using output_format=hec
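For illustration, a minimal sketch of what that looks like in a summary search; the index name is a placeholder:

index=web sourcetype=access_combined
| stats count BY status
| collect index=my_summary output_format=hec

With output_format=hec, collect writes the results as HEC-style JSON events, so the stats fields are preserved as structured fields instead of being flattened into the legacy stash-format _raw.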
Right, _TCP_ROUTING, not _TCP_ROUTE - my typo. But again - you can't selectively forward some part of the data from a particular input to a specific output. It's all or nothing. If you can live with it, that's... something I'd still test in a lab before pushing to prod.
A little background. Our organization set up hundreds of service templates when we rolled out ITSI. We're trying to clean up unwanted KPIs in these services. I have one KPI that I want off of all the service templates. The manual process of navigating:
1) Configuration
2) Service Monitoring
3) Service Templates
4) Search for a service
5) Edit
6) Click the X on the unwanted KPI
7) Save the template and propagate the change
is taking forever to do in bulk. Is there a faster way?
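For illustration only, a rough sketch of how this could be scripted against the ITSI REST API, assuming your ITSI version exposes the itoa_interface endpoint for base_service_template objects (the host, credentials, and object key below are placeholders - check the REST API reference for your ITSI version before trying this):

# list the templates and their KPIs
curl -k -u admin:changeme "https://localhost:8089/servicesNS/nobody/SA-ITOA/itoa_interface/base_service_template?fields=title,kpis"

# for each template, remove the unwanted KPI from the returned "kpis" array,
# then write the trimmed array back as a partial update
curl -k -u admin:changeme -X POST \
  "https://localhost:8089/servicesNS/nobody/SA-ITOA/itoa_interface/base_service_template/<template_key>?is_partial_data=1" \
  -H "Content-Type: application/json" \
  -d '{"kpis": [ ...remaining KPIs... ]}'

You'd still have to push the change out to the linked services afterwards (the equivalent of the "propagate" step), so test on one template first.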
Thanks Ryan. Yeah, I've seen that page, and from what I can see there, my AppD agent should work with the version of Java I'm using.
I read through the Splunk docs and it seems like a UF with customized inputs.conf and outputs.conf files should work. If the two enterprise servers are defined in the outputs.conf file, then we can use the inputs.conf stanza to customize the destination where various log files are sent. Just wanted to confirm before accepting this solution.

_TCP_ROUTING = <comma-separated list>
* A comma-separated list of tcpout group names.
* This setting lets you selectively forward data to one or more specific indexers.
* Specify the tcpout group that the forwarder uses when forwarding the data. The tcpout group names are defined in outputs.conf with [tcpout:<tcpout_group_name>].
* To forward data to all tcpout group names that have been defined in outputs.conf, set to '*' (asterisk).
* To forward data from the "_internal" index, you must explicitly set '_TCP_ROUTING' to either "*" or a specific splunktcp target group.
* Default: The groups specified in 'defaultGroup' in the [tcpout] stanza in the outputs.conf file
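To make that concrete, a minimal sketch of per-input routing on the UF; the group names, hosts, and paths are made up:

# outputs.conf
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997

[tcpout:secondary_indexers]
server = idx2.example.com:9997

# inputs.conf
[monitor:///var/log/app/special.log]
_TCP_ROUTING = secondary_indexers

[monitor:///var/log/app/normal.log]
# no _TCP_ROUTING here, so this input goes to defaultGroup (primary_indexers)

Note this routes everything an input picks up; as pointed out above, you can't split a single input's stream between outputs this way.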
Hi @James.Gardner, I found this Docs page that shows Supported Environments: https://docs.appdynamics.com/appd/24.x/24.8/en/application-monitoring/app-server-agents-supported-environments
My company had a Splunk 8.0 server that hadn't been upgraded in years. There was a lot of abandoned testing on it over the years, so cleanup and multiple upgrades to get to 9.2.1 were going to be a big undertaking. I decided to stand up a new server with 9.2.1 and migrate over the data. We went live on it a few weeks ago. We've had no issues with ingesting data or searches or alerts.

However, the Indexes page under Settings shows 0 on all indexes for Current Size and Event Count, and Earliest Event and Latest Event are all blank. This is happening on all the indexes, both internal and non-internal. We noticed this before go-live and talked to support. They said it was because of the trial license we were using and would go away when we put our real license on during go-live. We did the license switch during go-live but we're still seeing 0 for everything. We can search on these indexes, so there is data in them. I don't see any errors in the logs when we go to the Indexes page.

If I go to Indexes and Volumes: Instances in the Monitoring Console under Snapshots, it shows my bucket count and space used on the file system, but index usage is 0 for everything. Under Historical it does show the index sizes.
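For anyone hitting the same thing, one way to cross-check what the UI reports against the actual buckets is dbinspect; a minimal sketch:

| dbinspect index=*
| stats sum(sizeOnDiskMB) AS size_mb sum(eventCount) AS events BY index

If that returns sane numbers while Settings > Indexes still shows 0, the data itself is fine and the problem sits in whatever populates that page (e.g. the /services/data/indexes REST endpoint and its currentDBSizeMB / totalEventCount values), which at least narrows it down for support.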
You can use this search to look for any lookup edits that were logged to the _internal index:

index=_internal "Lookup edited*" sourcetype=lookup_editor_rest_handler
| table _time namespace lookup_file user

It will output the time it was saved, the app/namespace it was in, the filename, and the user that saved it.
In the meantime Splunk support confirmed the issue and an Escalation Manager is involved. We hope to get a fixed version soon, but currently we have no statement on this. You may also want to open a case, referring to case #3518811.
Wow. My problem was that this snippet works ONLY when I put the "T" in the timeformat:

| eval _time=strptime(time2, "%Y-%m-%dT%H:%M:%S.%3N")
Hello everyone, I found the solution with my team. In addition to changing outputs.conf by inserting the appropriate sourcetype, if the header is still not removed, we followed this procedure: we changed the template definition in the rsyslog file on all UFs, removing %TIMESTAMP% %HOSTNAME% (the part that appears in the header) from the configuration. Bye, G.
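For illustration, a sketch of that kind of rsyslog template change, assuming a string-type template (the template name and the surrounding format are hypothetical - adapt to what's actually in your rsyslog config):

# before: timestamp and hostname are prepended to every forwarded line
template(name="fileFormat" type="string" string="%TIMESTAMP% %HOSTNAME% %syslogtag%%msg%\n")

# after: only the tag and message body are written
template(name="fileFormat" type="string" string="%syslogtag%%msg%\n")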
Yes, but as I understand it, that's not the issue. If you copy the same contents several times over into a single file and upload it to Splunk via the "add data" dialog with the settings @jesperbassoe provided, it does get properly split into separate events. True, the final timestamp is discarded as it is treated as a linebreaker, but apart from that the stream is properly broken into events. The screenshot, however, shows the event butchered into separate parts, which doesn't really match the LINE_BREAKER definition. So the questions are:
1) Where are the settings defined (on which components; and are there any other conflicting and possibly overriding settings)?
2) How is the file ingested (most probably by a monitor input on a UF)?
The existing props are discarding the End Time value because of the LINE_BREAKER setting. LINE_BREAKER always throws out the text that matches the first capture group. Try these settings.

[nk_pp_tasks]
SHOULD_LINEMERGE = false
LINE_BREAKER = End Time:[^\*]+?()
NO_BINARY_CHECK = true
TIME_FORMAT = %Y.%m.%d.%H%M%S
TIME_PREFIX = \*\*+
BREAK_ONLY_BEFORE_DATE = false
There is no such thing as a "summary index" as a separate type of index. Anyway, are you sure the user you're running your search as can use the collect command?
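One quick way to check is to look at the role's capabilities over REST; a sketch, with "user" standing in for whatever role is actually involved (recent versions gate collect behind a capability such as run_collect):

| rest /services/authorization/roles splunk_server=local
| search title="user"
| table title capabilities imported_capabilities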
OK. This is a search from a particular accelerated datamodel. So for this to work, three things must be configured properly:
1) You must be getting proper logs from the firewall.
2) You must have the datamodel configured properly (I suppose you either have to ingest firewall data into a specific index or have to reconfigure the datamodel to cover the index you're ingesting your fw events into).
3) And finally, you must have datamodel acceleration enabled for that datamodel.
So these are three things that must happen before that dashboard can be populated with results. BTW, you pointed to a SOAR app as the relevant product for this thread. I suppose you meant the Fortinet FortiGate App - https://splunkbase.splunk.com/app/2800 - it does have a description section which seems to tell how to configure it (but I'd be cautious about the instructions for both this app and the accompanying add-on, because it's a third-party add-on, and vendors don't always know Splunk well, and some of their ideas can be far from best practice).
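To check points 2) and 3), comparing the raw and accelerated counts for the datamodel used in this thread can help; a minimal sketch:

| tstats count FROM datamodel=ftnt_fos

A nonzero count here means the datamodel's constraints match ingested data at all. Then:

| tstats summariesonly=true count FROM datamodel=ftnt_fos

This only returns a nonzero count if acceleration is enabled and the summaries have actually been built.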
Summary indexes are no different from other indexes, so the code you use to access one should work for the other. How does the existing code fail? What error messages do you see? Have you checked search.log? It's possible the query is being caught by the "risky command" trap because the collect command is considered a risky one. To avoid that, add the following lines to a commands.conf file (not the one in system/default):

[collect]
is_risky = false
Hi Picklerick, so the Forti app has an event dashboard to view the CPU and Memory, but when you open the search you get no results:

|tstats summariesonly=true last(log.system_event.system.cpu) AS cpus FROM datamodel=ftnt_fos WHERE nodename="log.system_event.system" log.devname="*" log.vendor_action=perf-stats groupby _time log.devname
| timechart values(cpus) by log.devname

New to Splunk, so just wondering if there is something here I need to mod...

Cheers
1. You don't "monitor the CPU" with Splunk as in "use search to interactively connect to the device and check its parameters". You can search the data that has been ingested prior to the search. So...
2. Do you have any data from your firewall ingested? Do you know where it is? Can you search it at all? Do you know _what_ data is ingested from the firewall?
3. What does "doesn't seem to work" mean? What are you doing (especially - what search are you running) and what are the results?
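As a first sanity check for point 2, a metadata-style search can show whether anything from the firewall landed at all; the sourcetype patterns below are guesses that depend on which add-on is in use:

| tstats count WHERE index=* (sourcetype=fgt* OR sourcetype=fortigate*) BY index sourcetype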
Hi Guys, has anyone done a search where you can monitor the CPU on the Fortinet Firewalls? It's in the App but doesn't seem to work? Cheers, Ahmed
Hi @PaulPanther, this is the screenshot of Job --> Inspect Job. Please, I need help on this ASAP.