Activity Feed
- Got Karma for Re: Why is KV store initialization failing after fassert() error in mongod.log?. 12-13-2024 05:48 AM
- Posted After upgrade, emails are not being sent on triggered alerts. on Alerting. 02-10-2022 09:54 AM
- Posted How is indexing daily amount configured? on Splunk Enterprise. 01-26-2022 05:59 AM
- Got Karma for Re: How do I set up inputs.conf to allow for a cloud application to send syslog over a SSL connection?. 02-04-2021 04:27 PM
- Posted Syslog Data is not being parsed correctly by Fortinet Fortigate App on All Apps and Add-ons. 08-19-2020 10:50 AM
- Got Karma for Re: How do I set up inputs.conf to allow for a cloud application to send syslog over a SSL connection?. 06-05-2020 12:50 AM
- Got Karma for How do I stop getting duplicate entries of JSON data from an API feed?. 06-05-2020 12:50 AM
- Got Karma for Re: Is Mongodb vulnerable to outside manipulation?. 06-05-2020 12:48 AM
- Got Karma for Re: Splunk Enterprise - How do I move indexed data from a cluster of indexers to a single instance?. 06-05-2020 12:48 AM
- Got Karma for Re: Trying to extract the value of a field which occurs twice in one event. Regex maybe?. 06-05-2020 12:48 AM
- Got Karma for Why am I getting "Invalid key in stanza [lookup:cam_category_lookup] in E:\Splunk\etc\apps\Splunk_SA_CIM\default\managed_configurations.conf, line 34: expose (value: 1)". 06-05-2020 12:48 AM
- Got Karma for How do I add a time range to a datamodel search that cannot use tstats?. 06-05-2020 12:48 AM
- Got Karma for How do I match an IP address to a range that spans multiple CIDRs?. 06-05-2020 12:48 AM
- Got Karma for Re: Why is KV store initialization failing after fassert() error in mongod.log?. 06-05-2020 12:48 AM
- Got Karma for Re: Why am I now getting "SSL configuration issue: invalid CA public key file" from Splunk Supporting Add-on for Active Directory after upgrading ?. 06-05-2020 12:48 AM
- Got Karma for Why am I now getting "SSL configuration issue: invalid CA public key file" from Splunk Supporting Add-on for Active Directory after upgrading ?. 06-05-2020 12:48 AM
- Got Karma for Why am I now getting "SSL configuration issue: invalid CA public key file" from Splunk Supporting Add-on for Active Directory after upgrading ?. 06-05-2020 12:48 AM
- Got Karma for What is the difference between splunk_managment_console and splunk_monitoring_console?. 06-05-2020 12:48 AM
- Got Karma for Why is splunkd.log not getting indexed? Receiving error "The file 'E:\Splunk\var\log\splunk\splunkd.log' is invalid. Reason: binary". 06-05-2020 12:48 AM
- Got Karma for Re: Why is KV store initialization failing after fassert() error in mongod.log?. 06-05-2020 12:48 AM
09-06-2017
05:23 AM
Could you be a little more specific? What entity should I be looking at that would be "logging in" in such a silly fashion?
09-01-2017
06:14 AM
I have multiple monitored csv files that are created every day at different times on a single server with a Universal Forwarder. Old files are deleted and completely new files are created. Each file is indexed when created and then again at 05:30 am the next day causing duplicate data.
Looking through Splunk Answers, I found where I should add the crcSalt = &lt;SOURCE&gt; line in the [monitor] section of inputs.conf. I did this for one of the files and the file is still being indexed twice.
What else should I do to stop the second indexing?
I do find it interesting that the second indexing for these files happens at the same time. Is there some config that sets that time? Just wondering.
Thanks for any help provided.
Scott
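For reference, the crcSalt approach mentioned above can be sketched in inputs.conf like this (the monitor path, sourcetype, and index are hypothetical placeholders, not the poster's actual values):

```ini
[monitor://E:\reports\*.csv]
# <SOURCE> salts the file checksum with the file's path, so files are keyed
# by path as well as content. It does not help when the same path is
# rewritten with similar leading bytes; initCrcLength is the knob for that.
crcSalt = <SOURCE>
sourcetype = csv
index = main
```

If the first 256 bytes of the re-created files are identical day to day, raising initCrcLength so the checksum covers more of the file is another setting Splunk provides for this situation.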
08-21-2017
12:50 PM
Answer is no longer needed. We now have a single system Splunk instance, so load balancing to indexers is not needed.
08-21-2017
12:47 PM
To solve this problem, I set up a Windows bat file to execute a .py file that saves the data into a json file on the local drive. Splunk monitors this file without any issues.
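The script pattern described above can be sketched roughly as follows (the record fields and output path are made up for illustration — the original script's API call and details aren't shown, so the fetch step is elided):

```python
import json

def save_feed(records, path):
    """Write one JSON object per line so Splunk can index each line as an event."""
    with open(path, "w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

# In the real script, `records` would come from the API response.
sample = [{"id": 1, "status": "ok"}, {"id": 2, "status": "fail"}]
save_feed(sample, "feed.json")
```

A Universal Forwarder [monitor] stanza pointed at the output file then picks up each appended line.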
08-21-2017
12:40 PM
1 Karma
I did move the data from \splunk\var\lib\splunk\ from the old systems to the new system. I had to modify the manifest.csv and .bucketManifest files with the folder names of the old data. After a restart, the data showed up!
08-17-2017
09:58 AM
I opened a case with support. They found that there was a local/savedsearches.conf file that had the searches disabled. I deleted the file, restarted Splunk and the setup was able to complete.
07-18-2017
08:39 AM
I am in the process of shrinking my Splunk configuration from a distributed setup to a single instance. I did a fresh install of Splunk Enterprise, moved old indexed data to the new system, and started configuring the Apps and Add-ons. While running the Splunk App for Windows Infrastructure's guided setup, it passes the Prerequisites (Splunk v6.6.1, Splunk Add-on for Windows v4.8.4 and Splunk Supporting Add-on for Windows Active Directory v2.1.4), passes Check data and then starts to experience issues. Using the Detect Features button, it starts looking for Windows and AD features.
The status window shows -
WinApp_Lookup_Build_Perfmon - Update - Server could not be built.
WinApp_Lookup_Build_Perfmon - Update - Detail could not be built.
WinApp_Lookup_Build_Event - Update - Server could not be built.
WinApp_Lookup_Build_Event - Update - Detail could not be built.
WinApp_Lookup_Build_Hostmon - Update - Server could not be built.
WinApp_Lookup_Build_Hostmon_Machine - Update - Detail could not be built.
WinApp_Lookup_Build_Hostmon_FS - Update - Detail could not be built.
WinApp_Lookup_Build_Hostmon_Process - Update - Detail could not be built.
WinApp_Lookup_Build_Hostmon_Services - Update - Detail could not be built.
WinApp_Lookup_Build_Netmon - Update - Server could not be built.
WinApp_Lookup_Build_Netmon - Update - Detail could not be built.
WinApp_Lookup_Build_Printmon - Update could not be built.
DomainSelector_Lookup could not be built.
HostToDomain_Lookup_Update could not be built.
tHostInfo_Lookup_Update could not be built.
tSessions_Lookup_Update could not be built.
SiteInfo_Lookup_Update could not be built.
ActiveDirectory: Update GPO Lookup could not be built.
ActiveDirectory: Update Group Lookup could not be built.
ActiveDirectory: Update User Lookup could not be built.
ActiveDirectory: Update Computer Lookup could not be built.
I then finish and look at the Overview; some data is populated but not enough to be useful.
I do not see any relevant errors in splunkd.log and am stumped as to where to look next.
Any help is appreciated.
06-13-2017
12:56 PM
Looking at the doc, it appears to me that only data that would be removed (cold to frozen) is instead being archived. It doesn't look like all the indexed data would be moved using this process.
I did a test run of copying the colddb, datamodel_summary_db and thaweddb directories of an index, plus the .dat file in the /var/lib/splunk location, and it looks like the indexed data is up to date and searchable. I am going that route. Fingers crossed.
06-13-2017
08:37 AM
We index about 6GB of data on weekdays with only 3GB on the weekends.
When you say "roll your buckets from the cluster to frozen and then thaw those on the standalone..." how would I do that?
As for ES, we use it to get the threat feeds.
Regards,
Scott
06-12-2017
04:42 AM
I have a Master, 2 indexer cluster, 1 Enterprise Security dedicated system and 1 search head.
I want to drop ES and go back to a single system.
As for Support, that is where I got the answers about bugs. Specifically, the response was that bugs SPL-140260 and SPL-140831 were the problem and Support suggested that I downgrade to version 6.4.x. Of course, there is no supported way to downgrade.
I am spending too much time playing whack-a-mole trying to keep these systems up. There are too many crashes, too much overhead traffic, etc.
06-09-2017
11:59 AM
Due to the number of bugs that I have been running into since moving to a clustered environment, I want to return to a single instance.
I found http://docs.splunk.com/Documentation/Splunk/6.6.1/Indexer/Moveanindex and it looks like all I have to do is move the \Splunk\var\lib\splunk directory to the new system. But nothing is ever that easy.
I am worried that since the buckets are duplicated, I will end up with too much data and Splunk will not know how to process that extra data.
I am also OK with someone telling me how to roll back my clustered systems into a single system.
Thanks.
05-18-2017
01:00 PM
1 Karma
Rob,
You can set up the query using the "rex" command and then mvindex using "eval".
Something like this -
| rex field=_raw max_match=5 "Account Name:\s+(?&lt;Account_Name&gt;\w+\$?)"
| eval Wanted_ID=mvindex(Account_Name,1)
Note – The “1” in the mvindex returns the second instance of “Account Name”, count starts at 0.
Hope this helps,
Scott
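For what it's worth, the same idea expressed with Python's re module (the sample event text below is invented, not a real Windows event):

```python
import re

# A made-up event with two "Account Name" occurrences, as in 4624 events
raw = "Account Name:  SYSTEM$  ...  Account Name:  jdoe  ..."

# SPL's max_match=N collects repeated matches into a multivalue field;
# re.findall plays the same role here.
names = re.findall(r"Account Name:\s+(\w+\$?)", raw)

# mvindex(Account_Name, 1) maps to list index 1 — the second occurrence,
# counting from 0.
wanted = names[1]
```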
05-03-2017
08:01 AM
I added the new system and set the replication factor to 3. Looking at the Indexer Clustering Master Node, it shows all data is searchable, the search factor is met and the replication factor is met. It also shows that there are 3 Replicated Data Copies for the indexes and 2 searchable data copies. Does that mean that there are 3 copies of all data and I am good to go on shutting down the problem indexer?
04-28-2017
11:50 AM
It appears to Splunk that the disk is slow. Operations has verified that the VM configuration is the same for the existing systems, same cores, same amount of memory and both on SSD.
In the past, cloning a system has caused issues. That is why I want to do a clean install of Splunk.
As the SSD array is fully allocated, this new system's disk will be on standard storage and I might not be able to get the same amount of disk space on the new system. When I increase the replication factor, does Splunk duplicate every "bucket" across all systems? I am concerned that I might fill up the new system's disk space quickly and cause more response problems.
04-28-2017
08:38 AM
I am currently running a 2-system indexing cluster on Windows VMs. One of the systems is experiencing poor performance, causing my search head to run slowly. What I want to do is configure a new VM, install a fresh Splunk Enterprise application, move all incoming data to this new system and bypass the old indexer. With the replication going on between the indexers in this cluster, can I shut down the slow indexer and not lose any indexed data?
If this is not feasible, is there a way to migrate the data from the old system to the new one?
04-28-2017
08:03 AM
I had the group that issues the report that is uploaded to Splunk move the timestamp column to be the first column. So far, Splunk is seeing the correct date/time.
04-19-2017
11:52 AM
Giuseppe,
It looks like that is pulling in data now. However, the time in the "Event timestamp" field is not being indexed correctly. The entry in the "Event timestamp" field is in this format - 4/17/2017 12:05:28 PM or 4/17/2017 2:27:43 PM. When I run a query against the record, the indexed data shows as correct but the _time field is incorrect. Splunk shows

| csv entry | Indexed entry | _time |
|---|---|---|
| 4/13/2017 5:57 | 4/13/2017 5:57:00 AM | 2017-04-13T05:57:00.000-05:00 |
| 4/13/2017 15:01 | 4/13/2017 3:01:10 PM | 2017-04-13T03:01:00.000-05:00 |

What is happening is that it is not converting the 24-hour clock correctly. I tried modifying the timestamp format in the props.conf file to %m/%d/%Y %H:%M or %m/%d%Y %I:%M:%S %p but nothing changed.
Any help would be appreciated.
Scott
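The two format strings tried above treat the hour differently: a 12-hour value with an AM/PM marker needs %I plus %p, while %H expects a 24-hour value and leaves the trailing "PM" unconsumed. Python's strptime uses the same strftime-style directives and shows the difference directly (timestamp taken from the table above):

```python
from datetime import datetime

stamp = "4/13/2017 3:01:10 PM"

# %I (12-hour clock) plus %p (AM/PM marker) converts 3 PM to hour 15
good = datetime.strptime(stamp, "%m/%d/%Y %I:%M:%S %p")
print(good.hour)  # 15

# %H (24-hour clock) cannot consume the trailing " PM", so parsing fails
try:
    datetime.strptime(stamp, "%m/%d/%Y %H:%M:%S")
except ValueError:
    print("24-hour format does not match an AM/PM timestamp")
```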
04-18-2017
05:54 AM
I have set up a directory on a Windows system to be monitored by a UF. Two csv files are created every night and are getting indexed. However, the timestamp is the time the file is created, not the time that is in the "Timestamp Fields" parameter.
The first line of my csv file is -
Event,Door,Side,First name,Last name,Picture,Credential,Supplemental credential,Event timestamp,Credential code,Card format
Event timestamp is in this format 4/15/2017 3:45:15 PM
The defined parameters under source type are Category - Structured, Indexed Extractions - csv, Extraction - Advanced, Timestamp fields - Event timestamp. All others are set to default.
props.conf contains -
[logs]
category = Structured
pulldown_type = 1
DATETIME_CONFIG =
HEADER_FIELD_LINE_NUMBER =
INDEXED_EXTRACTIONS = csv
NO_BINARY_CHECK = true
TIMESTAMP_FIELDS = Event timestamp
description = Door log
disabled = false
FIELD_QUOTE = '
The second problem is that not all lines of the file are being indexed. I cannot find any parameter that would restrict the size of a file to be indexed.
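One props.conf setting worth checking here (a sketch of a possible fix under the assumption the format is the culprit, not a confirmed answer): with INDEXED_EXTRACTIONS and TIMESTAMP_FIELDS in play, an explicit TIME_FORMAT matching the field's 12-hour layout removes Splunk's timestamp guesswork:

```ini
[logs]
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = Event timestamp
# 12-hour clock with AM/PM marker, e.g. "4/15/2017 3:45:15 PM"
TIME_FORMAT = %m/%d/%Y %I:%M:%S %p
```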
03-23-2017
06:43 AM
Using streamstats / eventstats as suggested, I was able to get the chart I needed.
index=* sourcetype=WinEventLog:Security LogName=Security EventCode=4624
| eval day=strftime(_time, "%m%d%y")
| streamstats count(eval(EventCode="4624")) AS pos4624 BY user day
| eventstats earliest(pos4624) AS first4624 BY user day
| where pos4624 = first4624
| append [search index=* sourcetype=WinEventLog:Security LogName=Security EventCode=4634
| eval day=strftime(_time, "%m%d%y") | streamstats count(eval(EventCode="4634")) AS pos4634 BY user day
| eventstats latest(pos4634) AS latest4634 BY user day| where pos4634 = latest4634 ]
| timechart span=1h count(first4624) AS Logon count(latest4634) AS Logoff
03-17-2017
07:17 AM
woodcock, thank you for the answer. I guess I need to define what I am trying to do a little better.
I want to have a line graph that shows each user's first logon and last logoff per day. My graph should look something like this - 2 lines, with the logon line around zero until 09:00 when logons will spike then drop close to zero for the rest of the day, and a logoff line that will be around zero until 18:00 when it will spike and then drop close to zero. I can get the data points using one of several searches, but when I try to use a bin of 1h in stats or timechart, I get a logon and logoff for every hour, not only the first logon and last logoff. Windows AD throws out constant 4624 and 4634 messages while the user is connected, causing any graph to show two straight lines.
Hope this makes my predicament clearer.
03-16-2017
05:45 AM
I get a nice table with the logon and logoff times per user using the following search -
LogName=Security EventCode=4624
| stats earliest(_time) AS LOGON by user
| join [ search LogName=Security EventCode=4634
| stats latest(_time) AS LOGOFF by user]
| eval LOGON=strftime(LOGON,"%H:%M"), LOGOFF=strftime(LOGOFF,"%H:%M")
What I would like to do is create a graph showing the count of logons and logoffs by user broken down by hour. The problem is that Windows creates multiple 4624 and 4634 messages. As timechart has a span of 1 hour, it picks up these "duplicate" messages and I get an entry for every hour the user is online.
How would I create a query to only count and chart the first logon message and the last logoff message per user by hour?
Thanks.
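Outside of SPL, the underlying computation is just a min/max per user (or per user and day): keep only the earliest 4624 and the latest 4634. A Python sketch with invented sample events (hour-of-day integers stand in for real timestamps):

```python
# (user, hour_of_day, event_code) — 4624 = logon, 4634 = logoff; sample data
events = [
    ("alice", 9, 4624), ("alice", 11, 4624), ("alice", 18, 4634),
    ("bob", 8, 4624), ("bob", 12, 4634), ("bob", 17, 4634),
]

first_logon, last_logoff = {}, {}
for user, hour, code in events:
    if code == 4624:
        # keep only the earliest logon per user, discarding duplicates
        if user not in first_logon or hour < first_logon[user]:
            first_logon[user] = hour
    elif code == 4634:
        # keep only the latest logoff per user, discarding duplicates
        if user not in last_logoff or hour > last_logoff[user]:
            last_logoff[user] = hour
```

The streamstats/eventstats answer posted above (03-23-2017) does the same filtering inside Splunk before handing the surviving events to timechart.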
03-13-2017
01:20 PM
We are moving to a new Anti-Virus vendor and I will need to add the add-on (TA) for the new vendor. My question concerns the old TA. If I remove the $SPLUNK_HOME/etc/master-apps/ section concerning the old vendor, does that remove the TA from the cluster nodes? Or am I going to have to delete the TA using "splunk remove app -auth user:PW" command?
02-23-2017
10:22 AM
I ended up having to break the search into 2 parts, the first one creating a lookup CSV file that matched the UserName to Title
|ldapsearch domain=default search="(&(objectClass=user)(!(objectclass=computer)))"
| dedup cn title
| table cn title
| rename cn AS UserName, title AS Title
| outputlookup ldaptitletouser.csv
I then used this file to do a lookup to match up the UserName in the Group search -
|ldapsearch domain=default search="(objectClass=group)"
|table cn, member
| rex field=member "CN=(?P&lt;UserName&gt;\w*\s\w*)"
| rename cn AS "Group"
| table Group, UserName
| lookup ldaptitletouser.csv UserName OUTPUT Title
| table Group UserName Title
Hopefully someone else can use this.
02-14-2017
12:12 PM
Using the Splunk Supporting Add-on for Active Directory, I have been tasked to find out which users are assigned to specific groups. I can get a table showing the "Common Name" of the users in each group -
|ldapsearch domain=default search="(objectClass=group)"|table cn,distinguishedName
|ldapgroup|rex field=member_dn "CN=(?P&lt;UserName&gt;\w*\s\w*)"| table cn,UserName | rename cn AS "Group"
Results of the search looks like this
Group UserName
IT Support Fred Flintstone
[blank] Barney Rubble
.
.
Security Thomas Magnum
[blank] Frank Cannon
I then run the following search to get the title of the user -
|ldapsearch domain=default search="(&(objectClass=user)(!(objectclass=computer)))" | dedup cn title | table cn title | rename cn AS UserName, title AS Title
Search results look like this -
UserName Title
Fred Flintstone Computer Analyst
Barney Rubble Senior Computer Analyst
Thomas Magnum Security Guard
Frank Cannon Security Manager
I what to have a table that combines the searches to look like this -
Group UserName Title
IT Support Fred Flintstone Computer Analyst
[blank] Barney Rubble Senior Computer Analyst
.
.
Security Thomas Magnum Security Guard
[blank] Frank Cannon Security Manager
I have tried join, append, appendcols and cannot get the items to line up correctly. What am I missing?
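Structurally this is a lookup join: build a UserName-to-Title map from the second search, then enrich each (Group, UserName) row from the first. In plain Python terms, with the sample names from the post:

```python
# UserName -> Title, as produced by the second ldapsearch
titles = {
    "Fred Flintstone": "Computer Analyst",
    "Barney Rubble": "Senior Computer Analyst",
}

# (Group, UserName) rows, as produced by the group search
groups = [
    ("IT Support", "Fred Flintstone"),
    ("IT Support", "Barney Rubble"),
]

# enrich each row with the user's title, like an SPL lookup;
# users missing from the map get an empty Title
combined = [(g, u, titles.get(u, "")) for g, u in groups]
```

This mirrors the outputlookup/lookup solution posted above (02-23-2017), which writes the title map to a CSV and joins against it.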
01-13-2017
12:37 PM
1 Karma
I opened a support ticket because this could be an issue during our compliance audits.
Splunk answered -
(start)
MongoDB is only used by Splunk in this context, so there is almost no risk of malware, as there needs to be some type of human interaction to achieve this (and Splunk is the only user of MongoDB here).
However, we are constantly vigilant about any threats or vulnerabilities. Here is an example of how SSLv3 was vulnerable to the POODLE attack (inclusive of MongoDB) and how it can be mitigated.
Long story short, if you have SSLv3 turned on, then you could be vulnerable.
(end)
As I had SSLv3 enabled under \etc\system\default\web.conf (sslVersions = ssl3, tls), I changed it to sslVersions = -ssl3, tls in \etc\system\local\web.conf.
Problem solved (fingers crossed).