You would use a macro when you find yourself writing the same bit of SPL repeatedly in multiple situations. Saved searches are used for populating summary indexes, creating correlation searches in the Enterprise Security app, populating dashboards, setting up alerts, scheduling reports, and so on. So macros and saved searches are very different things as well.
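For example, a scheduled saved search in savedsearches.conf might look something like this (the stanza name, index, sourcetype, and schedule below are just placeholders for illustration):

[Failed SSH Logins - Last 24 Hours]
search = index=os sourcetype=linux_secure "Failed password" | stats count by user
dispatch.earliest_time = -24h
dispatch.latest_time = now
enableSched = 1
cron_schedule = 0 6 * * *

A macro, by contrast, is just a reusable chunk of SPL that does nothing on its own until another search calls it.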
A data model is definitely not a macro. A macro operates like macros or functions do in other programs. A data model is a knowledge object built on a base search that produces a set of search results (such as tag=network tag=communicate). The data model provides a framework for working with the dataset that the base search creates. A data model is usually designed to reference an aggregate of similar sourcetypes, such as firewall data, and applies the same field extractions and other knowledge to all of the contained sourcetypes, no matter what type of device the data comes from (Cisco, Juniper, etc.). These data models ride on top of various Technology Add-ons, which format similar event data to be CIM compliant so that the event data will populate the relevant data model. Newer Splunk apps, such as Splunk Enterprise Security, depend on data models for their operation.
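For example, once firewall events from the various vendors' TAs are CIM compliant, you can report on all of them through the data model with tstats instead of raw searches. A rough sketch, assuming the CIM Network_Traffic data model is populated (the field values here are just illustrative):

| tstats count from datamodel=Network_Traffic where All_Traffic.action=blocked by All_Traffic.src, All_Traffic.dest

If the data model is accelerated, this kind of search is also much faster than searching the underlying sourcetypes directly.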
I am working on this myself and am still getting failures after configuring the proxy info. Does the proxy server field need to be populated in http:// format, or does just the IP address of the proxy suffice in that field?
Thank you, I was just banging my head against a data source's TZ setting. I usually match on sourcetype and that typically works, but in this case it did not (I think because I was overriding the sourcetype field). Matching the TZ stanza on source instead did the trick for me.
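For anyone who hits the same thing, the props.conf stanza ends up looking something like this (the path and time zone are placeholders, not my actual values):

[source::/var/log/custom/appliance.log]
TZ = America/New_York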
Does this example monitor stanza work? I'm trying to do something very similar:
[monitor:///var/log/syslog/sw]
and this didn't pull in any data.
I have found that dbx 1.1.6 with Java 1.7 seems to be the most stable for the use cases I run into. The main issue I run into with the newer v2 version of svc is that I can't configure it to tail a log correctly, but version 1.1.6 still does that very well.
I'm troubleshooting the same issue. In my case the errors are coming from SA-cisco-asa, but when searching through AV data, nothing in the search results should be triggering them. I tried editing permissions, etc. I think this is due to the fact that I'm using a limited user that only has access to the AV index and nothing else. I do not see these issues with an admin user that has more rights in my Splunk installation; I only see them with a user account that is limited to that one search index.
The easiest one to do a POC with would probably be one of the Cisco apps. I only say that because you can ingest Cisco syslog relatively painlessly. A lot of the operating system apps work best when you have a universal forwarder installed on the client servers to collect the data, which wouldn't be a problem if you have a test lab to use. The other alternative is to look into the data generators to create test data for different platforms to demo Splunk's search and visualization capabilities.
The easiest way to monitor Splunk operations in general is to install the SoS (Splunk on Splunk) app from Splunkbase and use its various dashboards to help provide insight into the splunkd logs.
Stanzas in props.conf are typically tied to a sourcetype. So, once you assign an event a sourcetype, you can use props.conf to write a field extraction for it. If you are just renaming a field with a FIELDALIAS, you can configure everything in props.conf.
If you are creating field names via a regex in transforms, or working with key/value pairs and need to define a header row, you will need transforms.conf as well.
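A rough sketch of both cases, with made-up sourcetype, field, and stanza names:

props.conf:
[my_sourcetype]
EXTRACT-username = user=(?<username>\S+)
FIELDALIAS-source_ip = src_ip AS src
REPORT-kvpairs = my_kv_extraction

transforms.conf:
[my_kv_extraction]
DELIMS = ",", "="

The EXTRACT and FIELDALIAS lines stand on their own in props.conf, while the REPORT line only works because it points at the matching stanza in transforms.conf.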
What format are your squid logs in? More than likely, the regex isn't matching. This app was designed to work with a custom squid log format, which is shown in the readme file included with the app. That custom format is recommended because it provides all of the Enterprise Security fields. Please post a small sample of your squid logs and I can take a look.
I just upgraded to Splunk 5.0.3 and I do have one instance of this error with a timestamp of 10 minutes ago, even though I performed the upgrade well over an hour ago. I'll chase it down, but I wouldn't say the issue is resolved by the most recent upgrade.
I did just figure out how to use the Juniper device as a reverse proxy. In addition to configuring the reverse proxy itself, I also had to create a rewriting policy that allows custom headers to be written. If anyone else needs help with this configuration, give me a shout.
Have you tried using a Juniper SA device or a MAG as the reverse proxy? I am having a similar issue. Could someone assist with using the Juniper devices as the reverse proxy?
Kristian, I just wanted to say thanks for the tip. I've been able to successfully use this method to do field extractions in some XML logs I'm working with.
Have you run tcpdump or some other utility to verify that the last universal forwarder, the one listening on 20981, is actually forwarding the syslog data to your syslog endpoint?
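Something along these lines on the forwarder host (interface, destination address, and port are placeholders) will show whether the packets are actually leaving:

tcpdump -nn -i eth0 host 192.168.1.50 and port 514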
Can you describe your Splunk deployment? Is everything installed on one server, or do you have a distributed deployment? Are you running Splunk on Linux or Windows servers? Do you have a sample of the splunkd logs after the upgrade showing specific errors?
In your macros.conf file for your app you could have something defined as simple as:
[firewall_traffic]
args =
definition = tag=firewall tag=communicate
Then in a saved search use it like this:
search = `firewall_traffic` | top 10 classification
So, in the application that you are writing, include a macros.conf file that defines a macro for the search with the _raw in it. Then you can call the macro by name in your searches instead of spelling out the _raw portion each time. I would reference the macros.conf example file, which I need to spend a little more time with myself.
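As a rough sketch, with a hypothetical macro name and search terms:

macros.conf:
[raw_contains_failure]
definition = (_raw="*fail*" OR _raw="*error*")

savedsearches.conf (or the search bar):
search = index=main `raw_contains_failure` | stats count by host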
You can restart the indexers one by one when you make a change. When you assign the app that applies your settings to the indexers, assign it so that it doesn't restart splunkd. You can do this in serverclass.conf by creating a server class for your indexers and setting restartSplunkd = false.
You can then create an app whose only job is to restart splunkd; that restart app is just an empty app. In serverclass.conf, create a separate server class for each of your indexers with restartSplunkd = true. Then, when you deploy your apps to your indexers, splunkd won't restart until you assign the restart app to the individual indexer's class. Once you are done with the restarts, unassign the restart app until next time.
The other saving grace you may have is that the Splunk servers check into the deployment server at slightly different points in their check-in interval, so both indexers probably won't restart at the same time anyway.
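A rough serverclass.conf sketch of that arrangement, with placeholder class, app, and host names:

[serverClass:all_indexers]
whitelist.0 = idx01.example.com
whitelist.1 = idx02.example.com

[serverClass:all_indexers:app:indexer_base_config]
restartSplunkd = false

[serverClass:idx01_restart]
whitelist.0 = idx01.example.com

[serverClass:idx01_restart:app:restart_splunkd_app]
restartSplunkd = true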
Have you tried creating a macro of the search that includes the _raw field? That way you can just specify the macro in the search bar instead of typing out the search with the _raw field in it.
This is a quote from the props.conf example file that ships with Splunk:
"The following are example props.conf configurations. Configure properties for your data. # # To use one or more of these configurations, copy the configuration block into # props.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to enable configurations."
I think this may only apply to the props.conf in the system/local directory on the server. I use a deployment server and deploy apps to all of my Splunk instances, and I have not had any restart issues on indexers when deploying new apps so far.
You definitely want to ingest that data into its own index; then you can limit which users have rights to view that index. An index is the smallest unit you can apply an ACL to. Are you using local Splunk logins or LDAP authentication? Basically, you create roles within Splunk and either map users to those roles directly or map LDAP groups to the roles and control the group membership in a directory service like Microsoft Active Directory.
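On the role side, a minimal authorize.conf sketch (role and index names are placeholders) looks roughly like this; the LDAP group or local users are then mapped to that role in Splunk's access control settings:

[role_hr_viewers]
importRoles = user
srchIndexesAllowed = hr_logs
srchIndexesDefault = hr_logs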
You will need to configure an inputs.conf file on the forwarder to monitor the file location of your log and send it to your Splunk server.
In the inputs.conf file on the universal forwarder you would have a stanza something like this:
[monitor:///var/log/logfilename]
sourcetype = logfile
disabled = 0
In this stanza, you basically want to specify the file location of the log file you are monitoring and give it a sourcetype. You can name the sourcetype anything you want; just pick something that makes sense for your environment.
You can deploy this inputs.conf file in a couple of different ways. If you are manually configuring everything, you could place the file in the /etc/system/local area under the universal forwarder's install path. If you want more granular control, you could deploy this configuration as its own app to the universal forwarder, in which case it would live under the /etc/apps/app-name/ area of the universal forwarder's install path.
You can name the app anything you like; it is good to have a functional naming scheme so you know what your apps do just by looking at them. This gets into a whole other area of Splunk configuration. A good guide to look through is the Splunk "Getting Data in Correctly" guide.
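For example, packaged as its own app (the app name here is just an example), the files on the forwarder would sit roughly like this:

$SPLUNK_HOME/etc/apps/org_all_linux_logfile_inputs/
    default/app.conf
    local/inputs.conf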
In our distributed deployment, I use a deployment app whose only task is to create indexes, and I have it assigned to just my indexer class. This makes it really simple to create custom indexes when I need them for Splunkbase apps or for any other app I create to collect new data. I use a lot of custom indexes because we have many different classes of users who shouldn't see all of the log data, just what is pertinent to them. So, using custom indexes for my various data sources allows me to restrict users' views with ACLs.
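A minimal indexes.conf stanza in that kind of app looks roughly like this (the index name is a placeholder):

[firewall_logs]
homePath   = $SPLUNK_DB/firewall_logs/db
coldPath   = $SPLUNK_DB/firewall_logs/colddb
thawedPath = $SPLUNK_DB/firewall_logs/thaweddb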