Activity Feed
- Got Karma for Re: What variables can you use in email subject?. 05-15-2024 11:36 AM
- Got Karma for Re: Searching local indexes on a search head?. 04-21-2023 01:02 PM
- Karma Where to add custom artifact types to use in the workbench? for jrodriguezap. 01-12-2023 02:57 AM
- Posted Re: How do I validate that boot-start was indeed enabled? on Monitoring Splunk. 11-04-2022 05:19 AM
- Posted Re: How can I verify if the boot-start is already enabled? on Splunk Search. 11-04-2022 05:17 AM
- Tagged Re: How can I verify if the boot-start is already enabled? on Splunk Search. 11-04-2022 05:17 AM
- Karma Re: How do I validate that boot-start was indeed enabled? for phil_wong. 11-04-2022 05:05 AM
- Got Karma for Re: Trying to break multiline event into separate events. 08-28-2022 01:27 AM
- Got Karma for Re: Whats the splunk equivalent of SQL IN clause. 10-11-2021 02:00 PM
- Posted Re: Difference between using xmlkv and KV_MODE=xml on Knowledge Management. 07-15-2021 09:37 AM
- Posted Re: Change Splunk mongodb to use wiredtiger storage engine on Knowledge Management. 01-03-2021 11:30 AM
- Karma Re: How to set loading order for panels? for woodcock. 11-09-2020 02:42 AM
- Got Karma for Re: How to calculate moving average and graph it as an overlay on a bar chart of actual values?. 08-11-2020 02:38 AM
- Got Karma for Re: dedup and unique difference. 07-02-2020 12:44 PM
- Got Karma for Re: Splunk Search for Total Values. 06-10-2020 05:14 AM
- Posted Re: need to extract Workweek from date on Splunk Search. 06-10-2020 12:52 AM
- Posted Re: need to extract Workweek from date on Splunk Search. 06-10-2020 12:20 AM
- Posted Re: Splunk Search for Total Values on Splunk Search. 06-09-2020 11:52 PM
- Got Karma for Re: Remove host name in Account_Name field. 06-05-2020 12:51 AM
- Karma Re: How do I add to Splunk Enterprise Security license? for Richfez. 06-05-2020 12:49 AM
Topics I've Started
No posts to display.
11-04-2022
05:19 AM
This command checks both init.d and systemd on Unix; without the sudo, it should also work on Windows.
sudo ./splunk display boot-start
07-15-2021
09:37 AM
The underlying code for both is the same, so the performance won't be much different. The difference is when you want these fields extracted. KV_MODE=xml is always applied for that sourcetype, while xmlkv only runs when you use it in a search string. So if you always want all of the fields extracted, use KV_MODE; if you only want them occasionally, use xmlkv in your search. If you only want one or two fields from a big XML file, it might be better to extract them with a normal regex extraction.
Another use for xmlkv is when not all of your event is clean XML. KV_MODE would fail and not give you the fields, but in a search you can use an eval or rex to extract and clean the XML portion and then run xmlkv on that.
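To make the two approaches concrete, here is a minimal sketch; the sourcetype name my_xml_events is made up for the example.
Always-on extraction, in props.conf:
[my_xml_events]
KV_MODE = xml
On-demand extraction, in the search string itself:
sourcetype=my_xml_events | xmlkv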
01-03-2021
11:30 AM
There is good news. Since version 8.1, this is not only supported but encouraged. See https://docs.splunk.com/Documentation/Splunk/8.1.1/Admin/MigrateKVstore for instructions.
06-10-2020
12:52 AM
I define WorkWeek as the week number in the year, so week 1 is the first week in Jan and 52 is the last full week in Dec.
06-10-2020
12:20 AM
You can use strftime to create the field.
| makeresults
| eval WorkWeek = strftime(_time,"%U")
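If the date lives in an existing field rather than _time, you can parse it first with strptime. A sketch, assuming a hypothetical field called date in %Y-%m-%d format:
| eval WorkWeek = strftime(strptime(date,"%Y-%m-%d"),"%U")
Note %U numbers weeks with Sunday as the first day; use %V instead if you want ISO week numbers.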
06-09-2020
11:52 PM
1 Karma
I would add this appendpipe immediately before your table command (plain append would start a new subsearch rather than summarise the existing results).
| appendpipe [stats values(Month_Year) as Month_Year sum(TotalCount) as TotalCount avg(SLA) as SLA sum(WorkingDays) as WorkingDays avg(DailyCount) as DailyCount | eval Analyst = "Total"]
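A self-contained way to see the effect, using makeresults with made-up analyst rows:
| makeresults count=3
| streamstats count as n
| eval Analyst="analyst".n, TotalCount=n*10
| fields - n _time
| appendpipe [stats sum(TotalCount) as TotalCount | eval Analyst = "Total"]
The appended row carries Analyst="Total" and the sum of the column above it.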
06-02-2020
12:24 AM
1 Karma
There are two ways to fix this. You could change the way the fields are extracted or you can filter them out at search time.
As these represent machine accounts that may need to be traced at a later date, I would prefer not to stop Splunk extracting them.
My preferred way of doing this is with an eval function in search.
| eval Account_Name=mvfilter(match(Account_Name, ".+[^$]$") )
This uses mvfilter to remove values that don't match a regex. The regex looks for .+ meaning one or more characters, followed by [^$] meaning anything that is not a $ symbol, then $ as an anchor meaning that must be the end of the field value.
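A quick way to test this, using made-up account names:
| makeresults
| eval Account_Name=split("jsmith,WORKSTATION01$,adoe",",")
| eval Account_Name=mvfilter(match(Account_Name, ".+[^$]$"))
Only jsmith and adoe survive; the machine account ending in $ is removed.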
06-01-2020
11:51 PM
The URL you give in your question prompts the browser to look for that file on the user's H: drive and not on the Splunk server. For security reasons you cannot just point to any folder on the Splunk server.
There is a way round this by uploading a file inside an app and then using its correct link. Let's say you have created an app called documentation. Go to Manage Apps, click Edit Properties for your app, upload your file with Upload asset, and save.
Once it is there, you can link to it with "/static/app/documentation/MyFile.pdf", so you can make your HTML:
<a href="/static/app/documentation/MyFile.pdf" target="_blank">help with data</a>
04-20-2020
12:40 AM
I have been informed this endpoint (/static/app-packages/APPNAME.spl) is now deprecated and may disappear in future versions. If you are relying on this, now is the time to look for an alternative.
04-17-2018
12:34 AM
1 Karma
This is a complex issue, but there is a way round it, although it is a bit manual to set up.
First, I hope you are following best practices and are using an external syslog receiver (Kiwi Syslog, rsyslog, syslog-ng, etc.) and not taking it directly into Splunk. If not, please read this: http://wiki.splunk.com/index.php?title=Community:Best_Practice_For_Configuring_Syslog_Input&r=searchtip
Set up your syslog receiver to save each host to a separate folder like below.
log/syslog
|________server1
| |______server.log
|
|________server2
| |______server.log
...
Then set up your inputs.conf to collect it, something like this:
[monitor:///log/syslog]
index = mysyslogIndex
host_segment = 3
sourcetype = syslog
The secret to time zones is the props.conf on the forwarder, which should look something like this (the TZ values must be valid zoneinfo names):
[source::/log/syslog/server1/*]
TZ = US/Eastern
[source::/log/syslog/server2/*]
TZ = GMT
[source::/log/syslog/server3/*]
TZ = Etc/GMT-1
Note the sign convention on the Etc/ zones is inverted: Etc/GMT-1 means UTC+1.
You will have to set a stanza with a TZ for every server that isn't in the same time zone as your Splunk server, but this is a one-off setup and should work going forward. Remember, both of these files belong on the forwarder.
04-15-2018
11:30 AM
These are very different commands and I can't see where the confusion is.
The search command has two uses. If it is the first command in a search request, it pulls data from the indexer that matches the terms you give it. In this case the word search is optional. If it is a subsequent command, it is a filter and any events or rows that do not match the terms get dropped.
The addinfo command does not add new events or filter existing ones. It adds four fields about the search to every event (info_min_time, info_max_time, info_sid and info_search_time). This is normally used as a step in summary indexing.
See the docs on addinfo for more detail, or this explanation of summary indexing.
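A quick way to see those fields (a minimal sketch):
| makeresults
| addinfo
| table info_min_time info_max_time info_sid info_search_time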
04-15-2018
11:09 AM
4 Karma
The uniq command removes duplicates when the whole event or row of a table is the same. It takes no fields or options, as everything is checked. It is an ideal command if you have fully duplicated data.
See docs on uniq for more detail.
The dedup command looks only at the fields you tell it to. So if I say "| dedup host", it only looks at the host field and keeps the first event from each host. You can specify multiple fields, and it has options like consecutive (only remove events with duplicate combinations of values in consecutive rows) or keepempty (also keep events that do not have the requested field).
See docs on dedup for more detail
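A small sketch of dedup keeping the first event per host, using made-up data:
| makeresults count=4
| streamstats count as n
| eval host=if(n<3,"hostA","hostB")
| dedup host
Two rows survive: the first for hostA and the first for hostB.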
02-16-2018
05:46 AM
1 Karma
As reported in other answers, you should fix this in your props.conf at index time, but if the data is already indexed you can break it up at search time as follows. Note you need the newline character after delim=" and can type it using Shift-Enter.
| eval raw=_raw
| makemv delim="
" raw
| mvexpand raw
| eval _raw=raw
Any other fields the original event had will now be on all of the part events, e.g. if line 3 had a user field, all 3 lines will have that user field. So you may want to delete them and re-extract them per line with something like this:
| fields - user
| rex "user=\"(?<user>[^\"]+)\""
01-25-2018
11:41 AM
I would question why you want it done at index time. It rarely makes a performance improvement (in fact it more often makes things worse) and takes more disk space.
But if you are sure you want to try this on your development system, use the above linked answer but replace the REPORT-xyz in props.conf with TRANSFORMS-xyz and add WRITE_META = true to the transforms.conf stanza.
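A minimal sketch of what that looks like; the sourcetype, stanza and field names here are made up.
props.conf:
[my_sourcetype]
TRANSFORMS-user = extract_user_indexed
transforms.conf:
[extract_user_indexed]
REGEX = user=(\w+)
FORMAT = user::$1
WRITE_META = true
You would normally also declare the field in fields.conf with INDEXED = true so search treats it as an indexed field.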
10-14-2016
06:47 AM
This is an old answer and only works prior to v7.1. For all other versions, read cbreshears_splunk's answer.
Yes. Just rename it with a .bak extension, restart, and use the default password of "changeme".
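A sketch of the steps, assuming the file in question is $SPLUNK_HOME/etc/passwd:
cd $SPLUNK_HOME
mv etc/passwd etc/passwd.bak
./bin/splunk restart
Then log in as admin with the password changeme.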
10-04-2016
11:06 AM
1 Karma
Advanced XML is now deprecated. HTML is now possible in Simple XML, as stated above.
09-14-2016
08:21 AM
2 Karma
Actually that isn't complete. This would show 1234.56 as 1.234.56, when the expectation is to see 1.234,56.
Add an extra two lines to fix it:
| eval count=tostring(count,"commas")
| rex field=count mode=sed "s/\./#/g"
| rex field=count mode=sed "s/,/./g"
| rex field=count mode=sed "s/#/,/g"
Make it into a macro if you need it a lot.
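A sketch of such a macro in macros.conf; the macro name eu_number is made up:
[eu_number(1)]
args = field
definition = eval $field$=tostring($field$,"commas") | rex field=$field$ mode=sed "s/\./#/g" | rex field=$field$ mode=sed "s/,/./g" | rex field=$field$ mode=sed "s/#/,/g"
Then call it in a search as `eu_number(count)`.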
06-29-2016
11:16 AM
My first question is why are you trying to create a new TA for access combined? It is in our "List of pretrained source types", which is defined in the file system/default/props.conf. Add-On Builder is detecting this and preventing you because of the layering of apps and the rules of precedence: if config is in a location with a higher precedence, your new TA will not be able to overrule it.
If your data differs from access combined, it should have a different sourcetype name.
If it is the same but you want to add a couple of field extractions, you can just create a new app and build those extractions whilst in it.
If you want to normalise it to a data model (which one(s)?), then it is a little more complex. Best practice is to create new apps on a development system where you can move any existing config to your new app without risking making a mistake in production. Only move to prod when you are happy.
If you have to do this in production, I would:
First create a new sourcetype; I called it ac2.
Under Advanced, delete the category line and replace it with
REPORT-access = access-extractions
Then click Next.
Upload sample data and continue as normal.
Once you have built your app, edit the app's props.conf from the command line, replacing ac2 with access_combined.
Restart Splunk for it to take effect.
But this takes away most of the advantage of the Add-On Builder being a GUI.
05-22-2016
03:38 AM
1 Karma
The time format is not fixed in log4j, so Splunk cannot assume one format. If your company has standardised on a date format, it would be good practice to add TIME_FORMAT to save Splunk having to test all possibilities.
In general it is good practice to use or clone Splunk's pretrained source types, and as always, the more you tell Splunk, the less it has to "guess", which reduces indexing load.
For reference, this link shows some of the date possibilities:
http://logging.apache.org/log4j/2.x/manual/layouts.html#PatternLayout
Look for date{pattern}
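As a sketch, if the logs used the common ISO8601 layout (e.g. 2024-01-15 10:30:00,123), the props.conf stanza might look like this; the sourcetype name is made up:
[my_log4j_app]
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 23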
05-05-2016
03:34 AM
1 Karma
This seems to be a recent change that came in with v6.4. I have found that if you add the following to the beginning of your search, it will include the search head and all other servers:
splunk_server=*
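For example, using the internal index so it runs anywhere:
index=_internal splunk_server=* | stats count by splunk_server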
09-23-2015
07:07 AM
You need to tell Splunk that it is using a different line breaker. On your indexer, create a props.conf stanza something like this:
[source::my/source/file.log]
LINE_BREAKER = ([\x02\x03]+)
Note that LINE_BREAKER must contain a capturing group; the text it matches is discarded as the event delimiter. You may want to replace the source stanza with your sourcetype.
See http://docs.splunk.com/Documentation/Splunk/latest/Data/Indexmulti-lineevents for more details.
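If Splunk still merges the pieces back into multi-line events afterwards, it is common to also disable line merging. A sketch, using the same hypothetical source:
[source::my/source/file.log]
LINE_BREAKER = ([\x02\x03]+)
SHOULD_LINEMERGE = false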
05-28-2015
03:59 PM
1 Karma
If there are items missing, you probably have them set as private.
Go to Settings > All configurations, choose your app from the dropdown, and check Show only objects created in this app context.
Click on the word Sharing twice so it has a down arrow next to it.
Anything that has Private in that column will not be packaged in your app, so click on Permissions, change it to This app only, and save.
Repeat for all the private items, then package using either of the above methods.
03-30-2015
02:33 PM
1 Karma
No, this will not be a problem. The forwarder gracefully stops sending and waits for the indexer to become available again. It will log that it can't reach the indexer, but that isn't a problem.
In ES you can state which devices are expected and which are intermittent, so as long as you set this correctly it will not complain. But one thing to be aware of is that ES normally only looks at the last 24 hours for security issues; if Splunk doesn't get the logs for longer than that, an issue may not be detected. You can probably adjust the correlation searches to allow for this.
02-06-2015
08:10 AM
Ignore the "5." before the time format; I don't know where that came from.