The search below looks up a serial number in another index. There will be multiple values of "X", but currently it only returns one.
How do I get it to return all of the values?
Also, a second question: since it's only returning one value, how does it choose which value to return?
index=email serialnumber=123456789
| join serialnumber type=left [ search index=db | dedup Y | rename serial AS serialnumber ]
| table serialnumber X
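One way to get all of the values back, assuming the goal is every X per serial number, is to collapse the subsearch with stats values() so each serial number carries a multivalue X field (a sketch reusing the field names from the search above):

```
index=email serialnumber=123456789
| join serialnumber type=left
    [ search index=db
      | rename serial AS serialnumber
      | stats values(X) AS X by serialnumber ]
| table serialnumber X
```

On the second question: by default join keeps at most one matching row from the subsearch (max=1), so you get whichever row the subsearch happens to return first — after the dedup Y, that choice is essentially arbitrary.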
When there is a match, I want the values joined to the event.
But when a match isn't found, I still want the original event returned.
At the moment there are 7 events but only 3 matches, and it's not returning the other 4.
The only downside with option 1 is that when Serialnumber doesn't exist, it doesn't return records.
So I only get records when a match is found, and sometimes there won't be a match.
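One pattern that avoids join entirely and keeps the events that have no match is to search both indexes at once and spread the matched values with eventstats (a sketch; the field names serial and X are assumptions carried over from the earlier search):

```
index=email OR index=db
| eval serialnumber=coalesce(serialnumber, serial)
| eventstats values(X) AS X by serialnumber
| search index=email
```

eventstats copies the X values onto every event sharing the same serialnumber, and the final filter keeps only the email events — so all 7 come back, 3 enriched and 4 with X empty.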
index=email SERIALNUM Subject | table SERIALNUM Subject location ipaddress racknumber
With a normal lookup, SERIALNUM would be used to match the field Serialnumber in a CSV file, and the "Lookup output fields" would be defined as location, ipaddress, and racknumber.
I have another index called "database" with the fields Serialnumber, location, ipaddress, and racknumber.
So I want to do the match from the first index (email) against the database index.
In other words, I'm trying to enrich one search by pulling fields from another index; they have a matching pair of fields, Serialnumber and SERIALNUM.
How would I do this?
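One way to do this, assuming the database index is small enough to dump into a lookup file, is a scheduled search that writes the CSV (db_serials.csv is a made-up file name), followed by an ordinary lookup in the main search:

```
index=database
| dedup Serialnumber
| table Serialnumber location ipaddress racknumber
| outputlookup db_serials.csv
```

```
index=email SERIALNUM Subject
| lookup db_serials.csv Serialnumber AS SERIALNUM OUTPUT location ipaddress racknumber
| table SERIALNUM Subject location ipaddress racknumber
```

This behaves exactly like the "normal lookup" described above, with the CSV refreshed from the database index on whatever schedule the first search runs.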
I am using the DB Connect add-on 3.1.3 and my results are not being enriched.
When setting up the DB Connect Lookup wizard:
the "Set Reference Search" returns the suitable fields from the Splunk index,
the "Set Lookup SQL" returns the requested fields from the DB,
and I set up the field mapping by selecting all the right fields.
But when I run "Preview Results" it does not enrich the data.
I've then done a packet capture and extracted the SQL query being sent to the SQL server; the last line is as follows:
dbxlookup WHERE "serial" IN (null)
So I'm looking up a serial number, and the value should be from the Splunk search.
"serial" AS "SERIAL_NUM"
So this should copy the value of SERIAL_NUM to serial, which is then used in the SQL string sent to the server. But it's sending null by mistake.
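A null inside the IN (...) clause usually means the Splunk-side field being mapped had no value at lookup time. A quick sanity check on the reference search (the index name here is a placeholder) is to confirm SERIAL_NUM is actually populated on the events the wizard sees:

```
index=email
| eval serial_state=if(isnull(SERIAL_NUM), "missing", "present")
| stats count by serial_state
```

If "missing" dominates, the field mapping is reading a field that isn't extracted at that point (case-sensitivity of the field name is a common culprit), which would explain the null reaching the SQL server.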
I am running the search "index="os_var_log" | stats count" and getting this error after upgrading to version 8 from version 6.5.5.
Below is the job log; any ideas? This happens on any of my indexes, big or small.
12-06-2019 12:31:38.706 ERROR SearchPhaseGenerator - Fallback to two phase search failed:std::bad_alloc
12-06-2019 12:31:38.707 ERROR SearchOrchestrator - std::bad_alloc
12-06-2019 12:31:38.707 ERROR SearchStatusEnforcer - sid:1575635498.9511 std::bad_alloc
12-06-2019 12:31:38.707 INFO SearchStatusEnforcer - State changed to FAILED due to: std::bad_alloc
12-06-2019 12:31:38.707 INFO SearchStatusEnforcer - Enforcing disk quota = 10485760000
12-06-2019 12:31:38.709 INFO DispatchStorageManager - Remote storage disabled for search artifacts.
12-06-2019 12:31:38.709 INFO DispatchManager - DispatchManager::dispatchHasFinished(id='1575635498.9511', username='admin')
12-06-2019 12:31:38.710 INFO UserManager - Unwound user context: admin -> NULL
12-06-2019 12:31:38.710 INFO UserManager - Unwound user context: admin -> NULL
12-06-2019 12:31:38.710 INFO LookupProviderFactory - Clearing out lookup shared provider map
12-06-2019 12:31:38.712 ERROR dispatchRunner - RunDispatch::runDispatchThread threw error: std::bad_alloc
This is the search I am trying to use in an event type so I can tag my events:
index=mail
| eval Subject=coalesce(Subject,subjectx)
| search Subject="*NVEM Battery Alert*"
But I get this error: "Eventtype search string cannot be a search pipeline or contain a subsearch"
How would I achieve my search without the search pipeline?
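Since the pipeline is only there to build Subject, one way around the restriction is to move the coalesce into a calculated field in props.conf (the sourcetype name mail is a guess for your actual sourcetype):

```
[mail]
EVAL-Subject = coalesce(Subject, subjectx)
```

Calculated fields run after field extraction, so referencing the extracted Subject inside EVAL-Subject should be safe, though it's worth verifying on your data. The event type can then be a plain search with no pipe:

```
index=mail Subject="*NVEM Battery Alert*"
```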
Is it possible to include the data from the log that a fired alert was triggered from?
For example, our web server creates a log when someone from a bad IP address connects in, and that triggers an email alert to the admin team.
Later down the road, I want to see all fired alerts and generate a report that shows the time each alert was triggered and the IP address value that came from the original web server log.
But to be clear, I need this to include the fired-alerts audit log so I know I'm comparing the real log from the web server against the corresponding fired alert.
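One common approach is to have the alert search itself write its matching events to a summary index with collect, so the original field values survive for later reporting (a sketch — the sourcetype, src_ip, alert_name, and the alert_summary index are all placeholders for your setup, and the summary index must already exist):

```
index=web sourcetype=access_combined src_ip=*
| eval alert_name="bad_ip_connection"
| collect index=alert_summary
```

```
index=alert_summary alert_name="bad_ip_connection"
| table _time src_ip
```

The fired-alert audit records themselves can usually be found with something like index=_audit action=alert_fired, which you could correlate against the summary events by time and saved-search name.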
The SNMP data in its default source type of "snmp_ta" isn't very good: the values in the fields are either too long with unnecessary information or missing key info, and the data is very inconsistent.
However, if I use JSON as a source type with the correct response handler, the field/value pairs create far better data outputs.
However, the MIBs don't appear to be translating the OIDs into their friendly names correctly when I use JSON.
Does anyone know if this is working correctly, and whether it is possible for the MIB translation to work when we use JSON as the source type?
We use a transforms.conf file with regex to extract the field values. However, the field names in the data input are not in human-readable format. But each value is predictable, and we have a reference CSV that would allow us to correlate these data together:
uadhshuasdfiuh = Server1
xcoijcxvboijcxvb = Server2
These fields are created on the fly and there are hundreds of them. My question is: how would I automatically rename these fields to be more usable in the Splunk UI?
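If the reference CSV is loaded as a lookup (say field_names.csv with columns raw_name and friendly_name — both placeholder names), one pattern for renaming hundreds of dynamic fields at search time is to flatten them with untable, translate the names through the lookup, and pivot back:

```
index=snmp
| table _time *
| untable _time raw_name value
| lookup field_names.csv raw_name OUTPUT friendly_name
| eval name=coalesce(friendly_name, raw_name)
| xyseries _time name value
```

This is a sketch, not index-time renaming: fields not present in the CSV fall back to their raw names via the coalesce, and the approach assumes one row of fields per _time in the tabled results.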
I'm trying to build an extraction to find the uptime from this data (examples below):
.1.3.6.1.4.1.789 Enterprise Specific Trap (87) Uptime: 0:27:51.35
.1.3.6.1.3.94 Enterprise Specific Trap (4) Uptime: 195 days, 7:01:04.00
Can anyone help with the regex?
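A rex sketch that matches both sample lines, with and without the "days" part, capturing everything after "Uptime:" into an uptime field:

```
| rex "Uptime:\s+(?<uptime>(?:\d+\s+days?,\s+)?\d+:\d+:\d+\.\d+)"
```

Against the examples above this should yield uptime="0:27:51.35" for the first line and uptime="195 days, 7:01:04.00" for the second; the (?: ... )? group makes the day count optional.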
mvexpand: output will be truncated at 1600 results due to excessive memory usage. Memory threshold of 500MB has been reached.
How do I increase this to, say, 4GB (I have 8GB in my server)?
I've changed "max_mem_usage_mb = 5000000000" in /opt/splunk/etc/system/local/limits.conf,
but this hasn't fixed it; also, I had to add this setting to my limits file myself.
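For what it's worth, max_mem_usage_mb is measured in megabytes, so 4GB would be 4096 rather than 5000000000 — an implausibly large value may not behave as expected. A sketch of the relevant stanza in /opt/splunk/etc/system/local/limits.conf:

```
[default]
max_mem_usage_mb = 4096
```

A Splunk restart may be needed for the change to take effect, and mvexpand's truncation warning is governed by this setting, so the 500MB threshold in the message should rise accordingly.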