Activity Feed
- Got Karma for Distinguish which Heavy Forwarder an event passed through?. 09-23-2024 07:35 AM
- Posted Send internal logs to 2nd index cluster on Monitoring Splunk. 10-26-2023 01:36 PM
- Posted How to display SPL to chart events? on Splunk Search. 03-24-2022 11:46 AM
- Posted What lookup permission or capability isn't set properly? on Splunk Enterprise. 02-09-2022 10:38 AM
- Tagged What lookup permission or capability isn't set properly? on Splunk Enterprise. 02-09-2022 10:38 AM
- Posted Re: How to pass CSV values to a search via macro? on Splunk Search. 12-21-2021 11:45 AM
- Posted How to pass CSV values to a search via macro? on Splunk Search. 12-21-2021 10:50 AM
- Posted Splunk Stream on single instance deployment (Linux) in a Windows environment on Splunk Data Stream Processor. 09-28-2021 02:10 PM
- Posted Eval based on multivalue field and _time on Splunk Search. 02-11-2021 02:43 PM
- Got Karma for Distinguish which Heavy Forwarder an event passed through?. 06-19-2020 07:47 AM
- Got Karma for Setting up Multisite Cluster, why can't Cluster Peers (indexers) start?. 06-05-2020 12:49 AM
- Karma Re: Is it possible to use a value in a lookup in order to automatically adjust the time range a scheduled search runs? for somesoni2. 06-05-2020 12:48 AM
- Karma Re: Subsearches compairing datasets for lguinn2. 06-05-2020 12:48 AM
- Karma Re: How do I simply insert an image into a dashboard? for niketn. 06-05-2020 12:48 AM
- Got Karma for Splunk Add-on for Tenable: How to correctly filter events to nullQueue from Tenable?. 06-05-2020 12:48 AM
- Got Karma for Table Data Bar - Customize. 06-05-2020 12:48 AM
- Karma Re: Difference between outputlookup and outputcsv for alacercogitatus. 06-05-2020 12:47 AM
- Karma Re: What is the best way to design/define roles? for lguinn2. 06-05-2020 12:46 AM
- Got Karma for Re: What does "Events may not be returned in sub-second order due to memory pressure." mean?. 06-05-2020 12:46 AM
- Posted How find index-time field extractions. on Getting Data In. 12-17-2019 08:44 AM
10-26-2023
01:36 PM
I have a few thousand universal forwarders, managed by a deployment server, and we're sending all logs (internal and non-internal) to index cluster A. In addition, I would like to send all internal Splunk logs to index cluster B. What's the simplest app package I can deploy via the deployment server to send a second copy of all internal logs from the universal forwarders to index cluster B?
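For reference, the rough shape of the deployable app I was imagining is below; the group names, hosts, and ports are placeholders, and I haven't verified these stanzas against a real deployment:
outputs.conf:
[tcpout]
defaultGroup = clusterA
[tcpout:clusterA]
server = idxA1.example.com:9997, idxA2.example.com:9997
[tcpout:clusterB]
server = idxB1.example.com:9997, idxB2.example.com:9997
inputs.conf:
# route the UF's own internal logs to both output groups
[monitor://$SPLUNK_HOME/var/log/splunk]
_TCP_ROUTING = clusterA, clusterB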
Labels:
- forwarder
- indexer
- indexer clustering
03-24-2022
11:46 AM
I can build this search using multiple appends and subsearches, but I assume there's an easier way I'm just not seeing, and I'm hoping someone can help (maybe using | chart?).
Essentially, I have a set of user login data: username and login_event (successful, failed, account locked, etc.).
I'd like to display a chart showing the total events (by login_event) and the distinct count by username, which might look like this:
login_event | count
---|---
successful | 1600
failed | 200
account locked | 10
successful (distinct usernames) | 1200
failed (distinct usernames) | 50
account locked (distinct usernames) | 9
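To sketch the shape of the output I'm after (untested, and I'm not sure appendpipe is the right tool here), something like:
... base search ...
| stats count AS count dc(username) AS distinct_users BY login_event
| appendpipe
    [ eval login_event=login_event." (distinct usernames)", count=distinct_users ]
| fields login_event count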
- Tags:
- splunk-search
02-09-2022
10:38 AM
We're running Splunk 8.1.7.2. I am an admin. I have created a lookup file (my_lookup.csv), and lookup definition (my_lookup) referencing that file, in an app (my_app). Both the lookup file and definition have permission set to "All Apps (system)" and "Everyone Read", write is for admin only.
When I run either of the following searches, I see the contents of the lookup as expected: | inputlookup my_lookup.csv or | inputlookup my_lookup
However, when my users attempt to run the same searches, they get the following errors:
- "The lookup table 'my_lookup.csv' requires a .csv or KV store lookup definition."
- "The lookup table 'my_lookup' is invalid."
I don't understand how this could be. It's also worth pointing out that the user used to be able to get results.
What permission or capability isn't set properly?
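In case it helps with troubleshooting, a | rest sketch along these lines should show the object ACLs as Splunk sees them (the eai:acl field names are my assumption of where the permissions appear):
| rest splunk_server=local /servicesNS/-/-/data/transforms/lookups
| search title="my_lookup"
| table title eai:acl.app eai:acl.sharing eai:acl.perms.read eai:acl.perms.write
| rest splunk_server=local /servicesNS/-/-/data/lookup-table-files
| search title="my_lookup.csv"
| table title eai:acl.app eai:acl.sharing eai:acl.perms.read eai:acl.perms.write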
Any help is greatly appreciated. Thanks.
- Tags:
- lookup
- permissions
12-21-2021
11:45 AM
I was receiving various parsing errors depending on the changes I made in an attempt to get it to work, and never received results. I suppose I should try to get the search working without a macro first: using the lookup to fill data into the "filter" parameter for the pivot. I can do it with dashboard tokens, but I'm not sure how to do it in SPL alone.
12-21-2021
10:50 AM
We have a foo.csv which is updated regularly, and we have searches that require some of the data in foo.csv to run properly. I would like to solve this with a macro in the searches, but I'm having difficulties.
foo.csv:
field1,field2,field3
bar11,bar21,bar31
bar12,bar22,bar32
bar13,bar23,bar33
I need "bar11","bar12","bar13" inserted into a search, like so:
| pivot fooDM barData
min(blah) AS min_blah
filter field1 in ("bar11","bar12","bar13")
So I created a macro, myMacro, which (when run alone in a search) gives a quoted, comma-separated list:
[| inputlookup foo.csv
| strcat "\"" field1 "\"" field1
| stats values(field1) AS field1
| eval search=mvjoin(field1, ",")
| fields search]
For the macro above, I've tried it both with and without "Use eval-based definition", and I place it in the search like this:
| pivot fooDM barData
min(blah) AS min_blah
filter field1 in (`myMacro`)
I would love any help. Thank you!
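In case it sparks ideas, a variant built around | return is below; this is purely a sketch, and I don't even know whether pivot's filter clause will accept a subsearch expansion at all:
[| inputlookup foo.csv
| stats values(field1) AS field1
| eval search="\"" . mvjoin(field1, "\",\"") . "\""
| return $search]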
09-28-2021
02:10 PM
We have a very small test environment, with a single-instance Splunk server (running on Linux) and a handful of Windows servers with UFs installed. I'm attempting to use Splunk Stream to monitor NIC traffic on the Windows UFs. Following the Splunk Stream docs precisely is confusing (and in many cases the docs are just wrong): https://docs.splunk.com/Documentation/StreamApp/7.4.0/DeployStreamApp/AboutSplunkStream
I'm at the point where I want to use the Splunk server's deployment server functionality to distribute Splunk_TA_stream to the Windows UFs, but I'm confused about how to properly configure the Splunk_TA_stream app before deploying it. (The docs say Splunk_TA_stream will be installed in SPLUNK_HOME/etc/deployment-apps preconfigured; this is certainly not true in my case.) I'm at a loss as to how to configure Splunk_TA_stream before deploying it (via the deployment server) to the Windows UFs. Any insight is greatly appreciated. Thanks
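For reference, my current guess at the local configuration to ship inside the app is below; I'm not at all sure these are the right setting names for this Stream version, so please correct me:
Splunk_TA_stream/local/inputs.conf:
[streamfwd://streamfwd]
splunk_stream_app_location = http://my-splunk-server.example.com:8000/en-us/custom/splunk_app_stream/
disabled = 0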
Labels:
- installation
- troubleshooting
02-11-2021
02:43 PM
I have a set of results with _time, many single-value fields, and a multivalue field which contains a large set of epoch values (mv_epoch). I want an eval to test whether any of the mv_epoch values fall between relative_time(_time, "-30d@d") and _time. So, something like:
... search results
| stats values(mv_epoch) AS mv_epoch values(field_a) ... BY _time
| eval test=if((relative_time(_time, "-30d@d")<=mv_epoch AND mv_epoch<=_time), "yes", "no")
I'm looking to solve this without using | mvexpand. Any help is greatly appreciated, thanks!
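The closest I can think of, assuming a Splunk version with mvmap (8.0+) and assuming mvmap can reference _time inside its expression, is an untested sketch like:
... search results
| stats values(mv_epoch) AS mv_epoch values(field_a) AS field_a BY _time
| eval in_window=mvmap(mv_epoch, if(mv_epoch>=relative_time(_time, "-30d@d") AND mv_epoch<=_time, "yes", "no"))
| eval test=if(isnotnull(mvfind(in_window, "yes")), "yes", "no")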
Labels:
- eval
12-17-2019
08:44 AM
Hello all,
Our environment has some custom index-time field extractions we find to be very useful (yes, I know Splunk doesn't recommend this). But due to the possible performance implications of this practice, I want to be 100% confident I know where all index-time fields exist in our indexes.
At first I thought this would be easy (just throw a | tstats command together...) until it dawned on me that I have no idea how to do this.
So, if anyone can think of how to get a list of indexes/sources/sourcetypes which contain non-standard index-time field extractions, that'd be a life-saver!
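The only other angle I can think of is grepping the configuration rather than the data, something like the btool sketch below, though that only shows where index-time extractions are configured, not which indexes/sources/sourcetypes actually contain them:
$SPLUNK_HOME/bin/splunk btool transforms list --debug | grep -i WRITE_META
$SPLUNK_HOME/bin/splunk btool fields list --debug | grep -i INDEXED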
Thanks for any help.
05-03-2019
06:33 AM
Just re-reading my post, and to clarify what I have in mind: I'm probably looking for a | rest command plus some logic to determine whether the search was run by the scheduler or run manually.
Thanks again.
04-30-2019
12:33 PM
Yeah, life sucks.
Anything else?
04-30-2019
12:04 PM
That doesn't prevent accidents. I suppose accidents are always possible, but I'm sure you can see it'd be very easy for someone to OPEN the search instead of clicking Edit to clone it, sending out hundreds of unexpected emails. Or one power user clicks on a saved search another power user created, sending out hundreds of unexpected emails.
I'm looking for a technical solution to reduce extremely visible errors, in addition to trying to be careful.
04-30-2019
11:47 AM
In a report I'm building, I'm using the | map command to send emails to many recipients, each with their own custom view of data. A problem I've run into while editing the search is that I do not want to accidentally send many erroneous reports via email if I run the search while testing/editing, or even by accidentally opening the search. I've come up with a rough solution, but I'm wondering if someone has a better idea.
Basically I've created a macro that:
1) uses | rest to check the cron the search is scheduled for,
2) guesses at the epoch time (cron_guess) the search would have run at today (this logic breaks if the cron doesn't follow a simple MM HH * * * format, e.g. 0,15,30,45 12 * * *),
3) checks to see if cron_guess = now()
After that, I use ranOnCron=1 to set the real email addresses, or ranOnCron=0 to set the email addresses to my test account, preventing any "real" emails from going out.
This works for my purposes, but, I'd love a more robust solution if anyone knows of something. Accidentally sending hundreds of emails to hundreds of people with garbage data isn't fun.
Thanks!
[ranOnCron(3)]
args = NS_user, NS_app, saved_search
definition = eval ranOnCron=
[| rest splunk_server=local /servicesNS/$NS_user$/$NS_app$/saved/searches
| search title="$saved_search$"
| rex field=cron_schedule "^(?<cron_min>\d+)\s+(?<cron_hour>\d+)\s+"
| eval cron_guess=floor(relative_time(now(), "@d"))+tonumber(cron_min)*60+tonumber(cron_hour)*60*60
| eval runOnCron_sec_min_hour=if(cron_guess==now(), 1, 0)
| return $runOnCron_sec_min_hour]
04-18-2019
04:40 PM
Yes, btool showed the settings I was attempting to implement, within the app I was attempting to implement them with.
04-18-2019
02:15 PM
But files in apps have a higher priority than those in system/default, obviously. There were no conflicts with these settings in other apps.
04-18-2019
05:59 AM
AH HA! Success!
Once I changed the limits.conf in system/local it worked. Which means the limits.conf [pdf] stanza is not being properly read by Splunk from etc/apps, even though it's showing up in btool. Strange.
BUG ALERT!
04-18-2019
05:51 AM
@niketnilay I've confirmed the limits with btool, but I also upped the values to what you've had success with, and I'm still limited to 1000 rows. Yes, Splunk has been restarted after changing the values.
I created a different dashboard to load the 1440 rows quickly:
<dashboard>
<label>test 1440 rows with makeresults</label>
<row>
<panel>
<title>1440 rows fast</title>
<table>
<search>
<query>| makeresults count=1440
| streamstats count AS rows</query>
<earliest>-24h@h</earliest>
<latest>now</latest>
<sampleRatio>1</sampleRatio>
</search>
<option name="count">50</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">none</option>
<option name="percentagesRow">false</option>
<option name="rowNumbers">true</option>
<option name="totalsRow">false</option>
<option name="wrap">true</option>
</table>
</panel>
</row>
</dashboard>
When using either Export > Export PDF, or Export > Schedule PDF delivery the PDF is limited to 1000 rows.
Which version of Splunk are you using? I'm on 6.6.11.
Would you mind testing my exact XML code above?
Thanks.
04-17-2019
12:53 PM
I'm in the process of building some high-priority dashboards for my management (time critical), and I'm having a problem when I schedule the PDF for delivery. One of my tables has 1370 rows, but the PDF version stops at 1000 rows.
Following guides here: https://docs.splunk.com/Documentation/Splunk/6.6.11/Viz/DashboardPDFs#Additional_configurations_for_PDF_printing
I discovered the defaults in limits.conf are:
[pdf]
max_mem_usage_mb = 200
max_rows_per_table = 1000
render_endpoint_timeout = 3600
I've changed them to the following, by pushing an app and restarting:
[pdf]
max_mem_usage_mb = 300
max_rows_per_table = 2000
render_endpoint_timeout = 3600
The table still stops at 1000 rows in the PDF.
Is this limitation not surpassable?
Any help is greatly appreciated. Thank you.
10-30-2018
01:58 PM
Is there a resource for indexing PowerShell transcription files?
We're using PowerShell 5.1. I've reviewed the information provided in a 2016 Splunk .conf talk here: https://conf.splunk.com/files/2016/recordings/powershell-power-hell-hunting-for-malicious-use-of-powershell-with-splunk.mp4
But the info in the talk isn't complete. For instance, our transcription files don't always have the "End time" footer, and they can contain multiple headers (Start time:, Username:, RunAs User:, etc.) within a single "Windows PowerShell transcript start" event.
Is there no TA for this?
Example problem file:
**********************
Windows PowerShell transcript start
Start time: 20181026141406
Username: foo/bar
RunAs User: foo/bar
Machine: foohostbar (Microsoft Windows NT 10.0.15063.0)
Host Application: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
Process ID: 10916
PSVersion: 5.1.15063.1387
PSEdition: Desktop
PSCompatibleVersions: 1.0, 2.0, 3.0, 4.0, 5.0, 5.1.15063.1387
BuildVersion: 10.0.15063.1387
CLRVersion: 4.0.30319.42000
WSManStackVersion: 3.0
PSRemotingProtocolVersion: 2.3
SerializationVersion: 1.1.0.1
**********************
**********************
Command start time: 20181026141425
**********************
PS R:\> get-adgroup compliance
DistinguishedName : stuff
GroupCategory : more stuff
GroupScope : yup, here's our stuff
Name : and more stuff
ObjectClass : and more stuff
ObjectGUID : and more stuff
SamAccountName : and more stuff
SID : and more stuff
**********************
Command start time: 20181026141442
**********************
PS R:\> get-adgroup compliance |Get-ADGroupMember
distinguishedName : stuff
name : and more stuff
objectClass : and more stuff
objectGUID : and more stuff
SamAccountName : and more stuff
SID : and more stuff
distinguishedName : and more stuff
name : and more stuff
objectClass : and more stuff
objectGUID : and more stuff
SamAccountName : and more stuff
SID : and more stuff
... a few hundred lines later....
**********************
Command start time: 20181026143530
**********************
PS R:\> TerminatingError(Export-Csv): "The process cannot access the file 'stuff' because it is being used by another process."
**********************
Windows PowerShell transcript start
Start time: 20181026141406
Username: foo/bar
RunAs User: foo/bar
Machine: foohostbar (Microsoft Windows NT 10.0.15063.0)
Host Application: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
Process ID: 10916
PSVersion: 5.1.15063.1387
PSEdition: Desktop
PSCompatibleVersions: 1.0, 2.0, 3.0, 4.0, 5.0, 5.1.15063.1387
BuildVersion: 10.0.15063.1387
CLRVersion: 4.0.30319.42000
WSManStackVersion: 3.0
PSRemotingProtocolVersion: 2.3
SerializationVersion: 1.1.0.1
**********************
**********************
Command start time: 20181026143530
**********************
PS>CommandInvocation(Out-String): "Out-String"
>> ParameterBinding(Out-String): name="InputObject"; value="The process cannot access the file 'stuff' because it is being used by another process."
export-csv : The process cannot access the file 'stuff' because it is being used by another
process.
At line:3 char:31
+ ... oupmember $groupnayme|export-csv $groupout -force -NoTypeInformation}
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : OpenError: (:) [Export-Csv], IOException
+ FullyQualifiedErrorId : FileOpenFailure,Microsoft.PowerShell.Commands.ExportCsvCommand
export-csv : The process cannot access the file 'stuff' because it is being used by another
process.
At line:3 char:31
+ ... oupmember $groupnayme|export-csv $groupout -force -NoTypeInformation}
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : OpenError: (:) [Export-Csv], IOException
+ FullyQualifiedErrorId : FileOpenFailure,Microsoft.PowerShell.Commands.ExportCsvCommand
export-csv : The process cannot access the file 'stuff' because it is being used by another
process.
At line:3 char:31
+ ... oupmember $groupnayme|export-csv $groupout -force -NoTypeInformation}
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : OpenError: (:) [Export-Csv], IOException
+ FullyQualifiedErrorId : FileOpenFailure,Microsoft.PowerShell.Commands.ExportCsvCommand
Notice the lack of:
**********************
Windows PowerShell transcript end
End time: 20181026094046
**********************
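For context, the rough props.conf shape I had in mind is below; the sourcetype name, regexes, and timestamp settings are guesses on my part (not from any TA), so treat them as a starting point only:
[powershell:transcript]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\*{10,}[\r\n]+(?:Windows PowerShell transcript start|Command start time:))
TIME_PREFIX = (?:Start|Command start) time:\s*
TIME_FORMAT = %Y%m%d%H%M%S
MAX_TIMESTAMP_LOOKAHEAD = 20
TRUNCATE = 100000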
Any help is greatly appreciated.
08-27-2018
09:29 AM
Thanks for the response, I should clarify my desired goal.
I'd like a timechart (covering 12 months, but span=1 month) which displays only one value per month, and that value is the number of "up" systems seen in the last data load indexed for the month.
Your response did lead me to a solution which works; however, doing two timecharts in a row seems sloppy. Any suggestions for a more elegant solution?
... my search
| eval state=if(system_status="up", 1, 0)
| timechart span=d@d sum(state) AS state
| timechart span=mon@mon last(state) AS state
08-24-2018
01:00 PM
So, I've simplified my real problem down to this example with as few variables as possible. I wish I could simply alter the manner in which the data comes in, but I cannot, so I need a solution via SPL.
Here it goes:
Almost daily, Splunk indexes a set of data that has two important fields, system_id and system_status. system_id is a unique identifier for each system, and system_status can have the values "up" or "down". This data is indexed all at once, almost daily. An example of the events would look like this:
One day:
08/24/2018T01:00:00 5671 up
08/24/2018T01:00:00 5672 up
08/24/2018T01:00:00 5673 down
08/24/2018T01:00:00 5674 up
08/24/2018T01:00:00 5675 up
08/24/2018T01:00:00 5676 down
08/24/2018T01:00:00 5677 up
The next day:
08/25/2018T01:00:00 5671 up
08/25/2018T01:00:00 5672 up
08/25/2018T01:00:00 5673 up
08/25/2018T01:00:00 5674 up
08/25/2018T01:00:00 5675 up
08/25/2018T01:00:00 5676 down
08/25/2018T01:00:00 5677 up
My goal: a timechart which shows the count of the number of systems "up" for the last data indexed each month. If it helps, each system_id is guaranteed to be in each set of indexed data.
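The closest shape I can picture (an untested sketch using the fields above) is to keep only the rows from the latest load in each month and count from there:
... my search
| eval month=strftime(_time, "%Y-%m")
| eventstats max(_time) AS last_load BY month
| where _time=last_load AND system_status="up"
| timechart span=1mon dc(system_id) AS up_systems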
This seems deceptively difficult. Many thanks for any help!
07-06-2018
08:43 AM
2 Karma
Hello,
I've been looking through documentation and other answers, and would like some ideas on our specific use case.
Essentially, we have 1 Search Head, 1 Indexer, a dozen Heavy Forwarders, and each Heavy Forwarder has an arbitrary and continuously changing number of UFs sending them data. All running 6.7.7.
We are in a situation where we would like to deploy conf files to the heavy forwarders/indexer/search head as needed, so we can identify which heavy forwarder each event has passed through. Ideally we would do this without affecting the current parsing of the events, which means I'm not comfortable altering _raw (as I've seen mentioned in some answers) by appending information to it at the Heavy Forwarders and then extracting the field at index time on the indexers or at search time on the search heads.
Looking at suggestions on this post, I have a couple questions: https://answers.splunk.com/answers/1453/how-do-i-add-metadata-to-events-coming-from-a-splunk-forwarder.html
What would adding this to inputs.conf on a Heavy Forwarder do for us? Is this useful?
[default]
disabled = 0
_meta = Terminal::1
Perhaps instead something like, on the Heavy Forwarder inputs.conf:
[default]
location = mylocation
then on Indexer, props.conf:
[default]
TRANSFORMS-location = addlocation
and transforms.conf:
[addlocation]
SOURCE_KEY = location
REGEX = (.*)
FORMAT = location::$1
WRITE_META = true
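To make the inputs.conf idea concrete, the other variant I'm weighing looks roughly like the sketch below; hf01 and the field name hf_name are placeholders, and I haven't tested whether this interferes with anything else:
On each Heavy Forwarder, inputs.conf:
[splunktcp://9997]
_meta = hf_name::hf01
On the Search Head, fields.conf:
[hf_name]
INDEXED = true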
Any insight and advice is greatly appreciated.
Thank you!
06-13-2018
01:01 PM
I have a Windows 2008 R2 server with a Splunk UF v6.6.7 installed.
We are monitoring many files on this server. Occasionally our data looks weird, and we come to find out a file wasn't indexed as we'd expect. Today a file wasn't indexed properly, so I looked at the logs, and sure enough there is a TailReader ERROR. I'm sure restarting the Splunk UF will resolve it (this is how we've fixed it before), but I'd really love to know why this is happening and how to prevent it.
Scrubbed btool of the inputs.conf stanza:
[monitor:D:\Logs\<my_folder>]
_rcvbuf = 1572864
crcSalt = <SOURCE>
disabled = false
evt_dc_name =
evt_dns_name =
evt_resolve_ad_obj = 0
host = <my_host>
ignoreOlderThan = 14d
index = <my_index>
sourcetype = <my_sourcetype>
whitelist = <my_filename_prefix>.+\.csv
Scrubbed log from today when file wasn't indexed:
06-13-2018 15:17:22.014 -0400 ERROR TailReader - error from read call from 'D:\Logs\<my_folder>\<my_filename_prefix>_06-13-2018.csv'.
Any help is greatly appreciated! Thank you!
Adam
05-16-2018
07:24 AM
Let me first say, I'm sure I could write a search that essentially returns what I'm looking for; however, due to the amount and nature of the data, it would not be a fast-running search. I'm looking to broaden my horizons using data models and acceleration to make this more efficient.
The goal: provide daily reports on activity for "flagged" accounts, only WHILE they are flagged. For example, if an account is flagged from 5/16/2018 12:00 pm to 2:00 pm, I'd like to find authentication activity for that account during the 2-hour period for which it was flagged.
I can easily write a search to return a list of accounts which had been flagged and the time span they were flagged for, but, I'm looking for advice on efficient ways to find the authentication events related to that flagged period. We have ES, with an accelerated authentication data model, though I don't have a good sense of how to apply my query to it. Would this be a custom correlation search in ES?
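To make the question concrete, the rough direction I've been considering is driving one accelerated-data-model query per flagged window with | map; the lookup name and its columns (user, flag_start, flag_end as epoch times) are made up for illustration, and I don't know whether this is the efficient approach:
| inputlookup flagged_accounts.csv
| map maxsearches=500 search="| tstats summariesonly=true count FROM datamodel=Authentication WHERE Authentication.user=\"$user$\" earliest=$flag_start$ latest=$flag_end$ BY Authentication.user, Authentication.action"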
Any suggestions, links, examples are highly appreciated.
Thank you.