Activity Feed
- Posted Can accelerated data models work with summarized data? on Getting Data In. 04-11-2023 11:46 AM
- Got Karma for Re: Visualizing output from top in a bar chart. 03-10-2022 03:26 AM
- Posted Substituting key values on raw text on Splunk Search. 02-01-2022 04:22 PM
- Got Karma for sendCookedData=false causing message rejects. 02-13-2021 11:09 AM
- Got Karma for Re: tstats summariesonly=t gets no results on accelerated data models. 09-25-2020 04:44 AM
- Got Karma for tstats summariesonly=t gets no results on accelerated data models. 09-25-2020 04:42 AM
- Got Karma for Best practices for Splunking mobile endpoints. 06-05-2020 12:50 AM
- Gave Karma for Re: Why am I encountering a bug when accessing nested JSON field values? to jtacy. 06-05-2020 12:49 AM
- Gave Karma for Re: sendCookedData=false causing message rejects to MuS. 06-05-2020 12:49 AM
- Gave Karma for Re: Why are the IP fields in the CIM data models defined as strings not IPv4 fields? to rpille_splunk. 06-05-2020 12:49 AM
- Got Karma for Re: Does Splunk support 3rd party certificates without passwords?. 06-05-2020 12:49 AM
- Got Karma for tstats summariesonly=t gets no results on accelerated data models. 06-05-2020 12:49 AM
- Got Karma for Re: tstats summariesonly=t gets no results on accelerated data models. 06-05-2020 12:49 AM
- Got Karma for Splunkweb Self-Signed SSL certificates not working with Chrome. 06-05-2020 12:49 AM
- Got Karma for Splunkweb Self-Signed SSL certificates not working with Chrome. 06-05-2020 12:49 AM
- Got Karma for Splunkweb Self-Signed SSL certificates not working with Chrome. 06-05-2020 12:49 AM
- Got Karma for Re: Does Splunk support 3rd party certificates without passwords?. 06-05-2020 12:49 AM
- Got Karma for Re: Does Splunk support 3rd party certificates without passwords?. 06-05-2020 12:49 AM
- Got Karma for ES Asset and Identity lookups only support a single pipe delimited field...???. 06-05-2020 12:49 AM
- Gave Karma for Re: How to filter search results by most recent timestamp by host to woodcock. 06-05-2020 12:48 AM
Topics I've Started
04-11-2023
11:46 AM
Can accelerated data models handle pre-summarized data accurately? Take authentication messages, for example. Most apps and operating systems send one message per authentication attempt, but I've seen some solutions that instead send a digest every five minutes or so with the count of successful/failed attempts by user. Are there settings that control how accelerated data models calculate the "count" when each message contains a "count=X" field? And if so, can an accelerated data model handle a blend of raw and summarized events?
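To make the question concrete, here's the distinction in SPL (the data model and field names are just hypothetical examples). Counting raw events is the default:
| tstats count from datamodel=Authentication by Authentication.user
whereas honoring a digest's count field would need something like:
| tstats sum(Authentication.count) AS count from datamodel=Authentication by Authentication.user
And blending raw and summarized events would presumably require every raw event to carry count=1, say via an eval of coalesce(count, 1) in the data model.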
Labels: data
02-01-2022
04:22 PM
Let's say I have a CSV input with the following columns: _raw,user,src_ip. The _raw event is:
"Accepted public key for user $user$ from $src_ip$"
Is there a way to replace $user$ and $src_ip$ in _raw with the values of the corresponding fields? I tried using "foreach" with "rex" in sedcmd mode, but it doesn't look like rex understands <<FIELD>> or '<<FIELD>>'. Is there another way to do this?
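One direction that might work (an untested sketch) is foreach with eval's replace() instead of rex, since eval expressions in a foreach template do expand <<FIELD>>:
| foreach user src_ip
    [ eval _raw=replace(_raw, "\$<<FIELD>>\$", '<<FIELD>>') ]
Inside the template, <<FIELD>> becomes the literal field name in the regex and '<<FIELD>>' becomes that field's value.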
Labels: rex
07-15-2019
01:09 PM
Upon further investigation, it appears that the events get broken when systemctl reports the value of the "load" column as "not-found".
This is on Ubuntu 18.04.
07-15-2019
01:00 PM
This is roughly what most of my Unix:Service events look like:
Mon Jul 15 12:33:46 PDT 2019 type=systemctl UNIT=wpa_supplicant.service, LOADED=loaded, ACTIVE=active, SUB=running, DESCRIPTION=WPA supplicant
But I'm seeing a fair number of events like this one:
Mon Jul 15 12:33:46 PDT 2019 type=systemctl UNIT=\x97\x8F, LOADED=systemd-vconsole-setup.service, ACTIVE=not-found, SUB=inactive, DESCRIPTION= systemd-vconsole-setup.service
It appears that some kind of data after "type=systemctl" has pushed each field's value into the next field in line.
Anyone have any idea how to fix this?
Splunk... will you please stop spending development time on stupid things like Apple Watch apps and fix the Linux TA? It is severely broken, and it is one of the most downloaded apps on Splunkbase.
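In the meantime, a possible search-time workaround (a sketch, assuming the values are always shifted by exactly one position when the unit name spills into LOADED) is to shift them back:
| eval shifted=if(match(LOADED, "\.service$"), 1, 0)
| eval UNIT=if(shifted==1, LOADED, UNIT), LOADED=if(shifted==1, ACTIVE, LOADED), ACTIVE=if(shifted==1, SUB, ACTIVE)
The comma-separated assignments evaluate left to right, so each one still sees the original value of the field to its right. SUB and DESCRIPTION would need the same treatment.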
07-08-2019
03:54 PM
And if so, has anyone done it?
From the AWS blog:
"ACM Private CA offers a secure, managed infrastructure to support the issuance and revocation of private digital certificates. It supports RSA and ECDSA key types for CA keys used for the creation of new certificates, as well as certificate revocation lists (CRLs) to inform clients when a certificate should no longer be trusted. Currently, ACM Private CA does not offer root CA support."
I would imagine the "does not offer root CA support" would mean it can't be used with Splunk, right?
Tags: splunk-enterprise
05-29-2019
01:31 PM
1 Karma
I'd just like to hear from anyone who has implemented a successful strategy for Splunking mobile endpoints (Windows, OSX, Linux...).
It would seem that, since accelerated data models run every five minutes by default, capturing events from mobile endpoints would need to happen within that five-minute window. So a Universal Forwarder that can only forward data when on a VPN or physically connected to the corporate network seems like a losing strategy.
This would mean having some kind of intermediate forwarder and deployment server available to the Internet.
Has anyone tried this? What kind of security controls and tools did you use to protect the Internet-facing Splunk infrastructure?
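At a minimum I'd expect mutually authenticated TLS between the UFs and the Internet-facing intermediate forwarder. A rough outputs.conf sketch on the UF side (hostnames and paths are placeholders; I haven't validated this end to end):
[tcpout:internet_forwarders]
server = fwd.example.com:9997
sslCertPath = $SPLUNK_HOME/etc/auth/client.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem
sslPassword = <your cert password>
sslVerifyServerCert = true
with requireClientCert = true on the receiver's splunktcp-ssl input so random Internet hosts can't connect.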
Thanks.
C
Labels: universal forwarder
02-25-2019
10:33 AM
I have the following eval statement:
| eval aaa=case(
action=="opened","success",
action=="closed","success",
action=="succeeded","success",
action=="failed","failure",
action=="Accepted","success",
action=="Invalid","failure",
searchmatch("error trying to bind as user"),"failure",
action=="new user","created",
action=="new group","created",
action=="add" AND app=="usermod","modified",
action=="removed" AND app=="gpasswd","modified",
app=="usermod" AND action=="change","modified",
app=="usermod" AND action=="lock","modified",
searchmatch("setting system clock"),"success",
action=="clock_sync","success",
app=="chage" AND action=="changed","modified",
app=="aide" AND action=="created","added",
app=="aide" AND action=="changed","modified",
app=="aide" AND action=="removed","deleted",
app=="ip route" AND action=="add","added",
searchmatch("changed password expiry"),"modified",
searchmatch("ip route add"),"added",
searchmatch("ip route del"),"deleted",
searchmatch("ip route replace"),"modified",
useradd_action=="new user" OR useradd_action=="new group","added",
action=="Up" OR action=="up","modified",
action=="Down" OR action=="down","modified")
If I use that statement in the search pipeline, it works. If I define it as an EVAL- statement in props.conf, it breaks completely. If I remove the searchmatch() calls, it works.
Is searchmatch() not supported in props.conf? If not, is there a workaround? I tried things like _raw=="*my text*", and that didn't work either.
I understand searchmatch() is an alias for match(). I tried using match() as well, and that doesn't work either.
Any ideas?
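For anyone trying to reproduce this, the form I'd expect to come closest (a trimmed, unverified sketch) is match() with _raw passed explicitly, since match() takes a field as its first argument while searchmatch() implicitly scans the whole event:
EVAL-aaa = case(match(_raw, "error trying to bind as user"), "failure", action=="failed", "failure")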
11-09-2018
01:40 PM
I tried that and it didn't work. When I look at the permissions for the lookup table and the automatic lookup, they are all set to Global...
11-09-2018
12:55 PM
When I run a SELECT on a SQLite table, some of the columns that have dates in them come back with an error:
failed to load column with type DATE
When I use sqlite3 to read that table, this is what I see in the date fields:
2018-10-12T18:09:00.887-04:00
2018-10-12T18:09:01.353-04:00
2018-10-12T18:09:01.620-04:00
2018-10-12T18:09:01.933-04:00
2018-10-12T18:09:02.370-04:00
2018-10-12T18:09:02.760-04:00
2018-10-12T18:09:03.073-04:00
Those are clearly dates, but I'm not sure whether the trailing UTC offset is tripping Splunk up. How do I tell DB Connect to parse this date correctly?
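The workaround I'm considering (a sketch; the table and column names are made up) is casting the column to text in the input query so DB Connect never sees the DATE type:
SELECT CAST(event_time AS TEXT) AS event_time FROM my_table
and then letting Splunk parse the string itself, with something like TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%:z in props.conf.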
11-02-2018
10:25 AM
I'm seeing some really weird behavior.
If I run | metadata type=sourcetypes index=XYZ, I see the sourcetype I'm looking for and the recent and latest times are within the last hour.
If I search on index=XYZ sourcetype=ABC over all time or over "before 1/1/2020", I get zero results.
If I search on index=XYZ host=10.*, I see events, and the sourcetype value is the one I'm searching for.
This is only happening to one sourcetype. Another twenty or so work fine, including other sourcetypes in the same index.
Anyone see this kind of thing before?
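For reference, the sanity check I plan to run next (a sketch) is comparing the index-time view against the plain search:
| tstats count where index=XYZ sourcetype=ABC by sourcetype
If tstats sees events that the plain search doesn't, that would point at search-time configuration (e.g., a sourcetype rename in props.conf) rather than missing data.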
Tags: splunk-enterprise
10-09-2018
11:35 AM
I experienced a similar problem. When I tried to search two data models, I got an error that the macro was missing. It turned out there was a tags.conf file applying the tags for the two data models, but no corresponding eventtypes.conf file defining the eventtypes those tags were attached to. When I removed the tags.conf file, the problem went away.
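In other words, the tags need eventtypes to attach to. The working pair looks roughly like this (stanza names and the search are just examples):
eventtypes.conf:
[my_authentication_events]
search = sourcetype=linux_secure
tags.conf:
[eventtype=my_authentication_events]
authentication = enabled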
09-25-2018
04:33 PM
I'm not trying to search the data model; I'm trying to feed data into it. I want to run something like this:
sourcetype=qualys earliest=-30d@d | eventstats max(_time) AS last_scan by dest | where _time=last_scan
That will give me the most recent scan of all hosts over the last 30 days. I want that in a data model since tscollect and namespaces aren't supported on search head clusters.
09-25-2018
03:13 PM
The search needs to use something like eventstats to find the most recent timestamp for the events, in this case the last time a destination IP was scanned by a vulnerability scanner. But searches with pipes aren't supported by data models.
So I could put the most recent scan time for each IP in a lookup table and create an automatic lookup for it. That way I could have a top-level search that says _time=last_scan_time, if I can get last_scan_time treated as a field (the way WHERE does) rather than as a string...
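Concretely, the scheduled search that feeds the lookup would be something like this (a sketch; the lookup name is made up):
sourcetype=qualys earliest=-30d@d | stats max(_time) AS last_scan_time by dest | outputlookup last_scan_times.csv
with an automatic lookup keyed on dest so last_scan_time appears on every event. The open question is still whether the root search can compare _time against that field.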
09-25-2018
02:08 PM
I'm trying to work around the limitations of data model root searches not supporting pipes.
Is there any way to test whether fieldX=fieldY at the root search level, or does Splunk always treat "fieldY" as a string?
Tags: splunk-enterprise
09-12-2018
12:53 PM
I have a customer with a nightmare syslog server environment -- different sourcetypes in different log files on different syslog servers, shared unqualified hostnames used in different data centers, some logs have FQDNs, some don't, etc.
My understanding is that the order of precedence for TRANSFORMS is: source:: stanzas override both sourcetype and host:: stanzas, and host:: stanzas override sourcetype stanzas.
So... I have TRANSFORMS stanzas applied to each source:: stanza to put the appropriate data into the correct sourcetype. I then apply index and host metadata TRANSFORMS to each of the sourcetype stanzas.
But for some reason, the host and index TRANSFORMS don't seem to be applied once an event has had a TRANSFORM applied in a source:: stanza. Is that expected behavior, or is there a limitation that metadata rewrites can only occur in the stanza with the highest precedence for a particular event?
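To make the setup concrete, it looks roughly like this (stanza and transform names are placeholders):
props.conf:
[source::/var/log/remote/dc1/*.log]
TRANSFORMS-set_st = set_sourcetype_acme_fw
[acme:fw]
TRANSFORMS-meta = set_acme_host, set_acme_index
The question is whether the [acme:fw] TRANSFORMS ever fire for events whose sourcetype was assigned by the source:: stanza above.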
09-12-2018
10:43 AM
I followed the instructions for setting up the monitoring console in distributed mode. I have added the cluster master, search heads, and deployment servers as search peers.
The monitoring console can see the cluster master and identify the number of buckets, amount of data, CPU utilization, etc. But none of the indexer cluster members show up.
It is a multi-site cluster with two sites. Does the monitoring console need to be in site0? Any other ideas on what might be causing this issue?
Thx.
08-23-2018
03:46 PM
One option is obviously to use shared storage. That's the least desirable option.
If I schedule the search to run tscollect, it will be run on a random search head in the cluster, right?
So another option would be to save the search to all cluster members and then use a cron job on each one to run the search and generate the namespace's tsidx files...?
Is there any other cleaner way to do this?
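The cron option would look something like this on each member (paths, names, and credential handling are placeholders; I haven't tested whether tscollect behaves this way on SHC members):
0 * * * * /opt/splunk/bin/splunk search 'sourcetype=qualys earliest=-1h | tscollect namespace=qualys_ns' -app search -auth admin:changeme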
08-23-2018
03:00 PM
Is it possible to delete the contents of a namespace in a clustered environment from the search pipeline or a settings menu somewhere? Or do they need to be deleted by hand on each indexer?
08-09-2018
11:32 AM
1 Karma
Here is the link to the documentation page for the ES Asset and Identities lookups:
http://docs.splunk.com/Documentation/ES/5.1.0/Admin/Formatassetoridentitylist#Asset_lookup_header
It states for the ip, mac, nt_host, and dns fields:
"A pipe-delimited list of single IP address or IP ranges. An asset is required to have an entry in the ip, mac, nt_host, or dns fields. Do not use pipe-delimiting for more than one of these fields per asset.
So... if I can only use a pipe-delimited list in one of those fields, how am I supposed to track assets that have multiple NICs, and thus multiple IPs and multiple MAC addresses?
What happens if two fields are defined with pipe delimited values?
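As I read it, a multi-NIC asset could only look something like this (an illustrative row, not from the docs), pipe-delimiting the ip field alone:
ip,mac,nt_host,dns
10.0.1.5|10.0.2.5,00:11:22:33:44:55,HOST01,host01.example.com
which leaves nowhere to record the second NIC's MAC address, and that is exactly my problem.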
08-01-2018
11:19 AM
Thanks, MuS... It's not a danger zone I wanted to wade into. I have access to my customer's cluster and intermediate forwarders, but not the UFs. Looks like the smart move is to push those transforms to the UFs and do it there...
07-31-2018
11:33 AM
1 Karma
I have a customer where the Splunk team does not have management access to forwarders and the ops people won't allow agents to be managed by a deployment server. Their data is kind of messy and requires a number of sourcetype and host metadata rewrites.
Since pushing out any changes to the forwarders is a slow, time-consuming process, it makes sense to put the metadata rewrites and routing logic on the indexers. This would require that UFs and intermediate forwarders have sendCookedData=false in their outputs.conf files.
When I enabled that setting on the UFs and intermediate forwarders, data stopped flowing in and I saw a ton of the following messages:
07-31-2018 17:47:02.429 +0000 ERROR TcpInputProc - Message rejected. Received unexpected message of size=1249209376 bytes from src=10.192.1.7:64398 in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source sending data to splunktcp port or valid source sending unsupported payload.
I saw this on both the intermediate forwarders and the indexers. I looked for "67108864" in the default limits.conf, but couldn't find anything.
Anyone know how to disable cooked data without triggering this message?
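For completeness, my understanding (an untested sketch) is that uncooked output can't target a splunktcp input at all; it has to go to a plain tcp input:
outputs.conf on the forwarder:
[tcpout:raw_out]
server = indexer.example.com:9999
sendCookedData = false
inputs.conf on the receiver:
[tcp://9999]
sourcetype = my_raw_feed
If that's right, pointing sendCookedData=false at the existing splunktcp port would explain the rejects.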
Thx.
C
07-27-2018
12:49 PM
Since the recommended best practice is for ES to run on its own search head cluster, I have several ES customers that run both an ES and a non-ES search head cluster.
Is there a recommended method for keeping configs that apply to both clusters in sync? When users create content that gets saved in the app's "local" directory, what is the best way to get those changes into the other cluster? Can I just copy that directory to the other search head cluster and have it synchronize? Or does the search cluster only recognize new content if it is created by a user in the GUI?
If I take the local content and push it out with the deployer, it will end up in the default directory on the search heads rather than the local directory. Can the local directories then be deleted, or are those changes stored in the Raft repository and will they get reapplied?
Thx.
C
07-25-2018
12:22 PM
I'm trying to rewrite the host field based upon values in my data. Here is a sample event:
{"href":"/orgs/1/audit_log_events/2c2b3a2d-eda4-4391-adcc-8feb309da406","event_type":"proc_stopped","severity":"info","timestamp":"2018-07-25T19:13:50+00:00","created_by":{"server":{"hostname":"#######","href":"/orgs/1/agents/5654"}},"data":{"reporter_proc":"AgentMonitor","reporter_pid":4150,"stopped_proc":"AgentMonitor"}}
Here is my transforms.conf:
[illumio_host_rewrite]
REGEX = created_by\":{\"server\":{\"hostname\":\"([^\"]+)
FORMAT = host::$1
DEST_KEY = MetaData:Host
Here is my props.conf:
[illumio:audit]
TRANSFORMS-illumio_host_rewrite = illumio_host_rewrite
I verified that the regex works using the "regex" search command. I pushed it out to the indexing tier, and the host value hasn't changed. I pushed the exact same config out to the system that connects to the Illumio API, but that didn't work either. On the collector machine, though, I'm seeing an error message that says:
07-25-2018 12:12:41.582 -0700 ERROR regexExtractionProcessor - REGEX field must be specified tranform_name=illumio_host_rewrite
I'm not seeing that error on the indexers though. Both hosts are running Oracle Linux.
Any ideas why that isn't working...?
07-20-2018
11:32 AM
I'm seeing two different errors with the Qualys TA. When I try to set up the app, I see the following:
Encountered the following error while trying to update: Error while posting to url=/servicesNS/nobody/TA-QualysCloudPlatform/qualys/qualys_configure/setupentity
When I look in the qualys.conf, I can see the password in clear text rather than being encrypted...
I'm also seeing this error message when running the TA on a search head cluster:
TA-QualysCloudPlatform: 2018-07-20 18:19:05 PID=23510 [MainThread] ERROR: TA-QualysCloudPlatform - This setup is configured as Search Head. You should not run %s on Search Head. I am Exiting.
Why can't the TA run on a search head? That makes no sense, considering that all of the searches in the Qualys app depend on the knowledge base lookup table. Do I have to run this on a separate box, have it input the knowledge base, and then summary-index the results so the search heads can build the KB?
07-17-2018
02:11 PM
I should also add that when I run | datamodel Certificates search, the dest field populates properly in that data model.
Neither data model is accelerated yet.