All Topics


I have 3 fields: "Runtime_A", "Runtime_B", and "guid". My current query is:

search | chart values(guid) AS "Guid", values(Runtime_A) AS "Total Runtime", values(Runtime_B) AS "Partial Runtime"

My graph is empty, and there is only one x-value on the x-axis, and it's a comma-separated list of all the guids. What's wrong with my query?
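A note on the symptom above: `chart` with only `values()` aggregations and no split field collapses everything into a single row, which matches the single comma-separated x-axis label described. A minimal sketch of a fix, assuming the goal is one chart column per guid:

```
search
| chart values(Runtime_A) AS "Total Runtime", values(Runtime_B) AS "Partial Runtime" BY guid
```

With `BY guid`, each guid becomes its own x-axis category and the two runtimes become separate series.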
I want to understand which mechanism is used during CSV lookup bundle replication. Does the whole CSV get replicated, or only the changes to it? I checked the replication logs and found that the complete CSV does not get replicated, so why does CSV lookup replication cause search head replication issues when the size of the CSV exceeds 2 GB? What is the problem with CSV lookups that makes them a poor fit for large lookups, such that everyone advises moving to the KV store or DB Connect?
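For context on working around an oversized lookup: CSV lookup files travel inside the knowledge bundle that the search head pushes to indexers, so a huge CSV inflates every bundle push. One commonly suggested mitigation (a sketch, with a hypothetical app and file name; the value is a regex matched against the lookup path) is to exclude the file from bundle replication in distsearch.conf:

```
# distsearch.conf on the search head — keep a huge lookup out of the bundle
[replicationBlacklist]
huge_lookup = apps[/\\]myapp[/\\]lookups[/\\]big_lookup\.csv
```

Note that a blacklisted lookup is then unavailable at search time on the indexers, which is part of why the KV store (replicated incrementally, queried via collections) is usually recommended for very large lookups.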
I have a large set of data that comes into Splunk regularly, but with a couple of days' delay. It needs to be accelerated to be usable in our environment, but for a 7-day data model I think I would need some way to tell the data model to start accelerating at -3d and go back 7 days from there, to give the data time to arrive, instead of starting now and going back 7 days, since data will never be in "now". Any thoughts or suggestions other than just building a summary index manually?
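One possible direction (a sketch, not a tested config, with a hypothetical data model name): accelerate a window wider than the 7 days actually needed, so events arriving a few days late still land inside the summary range and get picked up by the scheduled summary searches:

```
# datamodels.conf — sketch only
[my_datamodel]
acceleration = true
# cover 7 days of usable data plus ~3 days of ingestion lag
acceleration.earliest_time = -10d
# allow backfill over the same window if summaries need rebuilding
acceleration.backfill_time = -10d
```

The trade-off is storing a few extra days of summaries rather than shifting the window itself, which the acceleration settings don't directly support.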
Create a wildcard string for a TERM match from a CIDR or a list of CIDRs. This way you can search indexes or data models like this: index=myindex TERM(192.168.0.*) for the CIDR 192.168.0.0/24
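A sketch of the tip in use: the TERM() wildcard is only a cheap index-time prefilter, so for anything other than an exact /24 it helps to follow it with cidrmatch() to drop false positives (src_ip is an assumed field name):

```
index=myindex (TERM(192.168.0.*) OR TERM(192.168.1.*))
| where cidrmatch("192.168.0.0/23", src_ip)
```

The TERM clauses let the indexers skip events cheaply; the cidrmatch pass gives the exact subnet membership.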
Hello, I'm training on Splunk and I need help. I have an invoice list, extracted via this query:

sourcetype="*_invoice" | where in(id, 350, 128, 307) | table id invoice ProductType

Result:

350 261313851 phone
128 261313851 screen
307 538601320 aquarium
.....

But I have to exclude invoice number 261313851 because it contains id = 350. How can I do this, please? foreach with an if condition?

| foreach invoice [eval status_invoice=if(id!=350, "ok", "ko")] | where status_invoice="ok" ?

Thank you in advance for your help. Regards, vita86
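One way to do that exclusion (a sketch): collect all the ids seen per invoice with eventstats, then drop any invoice whose id list contains 350. foreach isn't needed, since the decision depends on other events of the same invoice, not other fields of the same event:

```
sourcetype="*_invoice"
| where in(id, 350, 128, 307)
| eventstats values(id) AS ids_on_invoice BY invoice
| where isnull(mvfind(ids_on_invoice, "^350$"))
| table id invoice ProductType
```

mvfind returns null when no value in the multivalue field matches, so only invoices without id 350 survive.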
In Splunk I have configured searchbnf.conf to provide helpful search hints inline while the person types SPL. It works pretty well, with each search option coming up in green as a suggestion. However, the suggestions always have an equals sign after the option. For instance, the custom search command I created is runmycmd, and it has the option trigger, then a host= parameter for the user to use in SPL. The right way to use it is:

| runmycmd trigger host=mylinuxmachine

In my searchbnf.conf I am specifying that trigger is an option to the runmycmd command, but the assistant always wants to put an equals sign after it. Is there a different option here that restricts the help to just a parameter name rather than a name=value pair?

[runmycmd-command]
syntax = runmycmd
shortdesc = Executes a request of your environment to bring data in. This command includes several operators.
description = Uses this method to retrieve data
usage = public
example1 = | runmycmd trigger host=mylinuxmachine
comment1 = In this example you find a trigger that is on a host named mylinuxmachine
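One thing worth trying (a sketch, under the assumption that literal keywords written directly into the syntax rule are suggested without an equals sign, while name=value pairs are): declare trigger as a bare literal in the syntax string rather than as an option term:

```
# searchbnf.conf — sketch only
[runmycmd-command]
syntax = runmycmd (trigger)? (host=<string>)?
shortdesc = Executes a request of your environment to bring data in.
description = Use this command to retrieve data.
usage = public
example1 = | runmycmd trigger host=mylinuxmachine
comment1 = Find a trigger that is on a host named mylinuxmachine.
```

Here trigger is part of the command grammar itself, so the assistant should offer it as a keyword, while host= keeps its name=value shape.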
I have 3 extracted fields: "guid", "runtime_general", and "runtime_specific". There is also a value "A" that I will search on to get the values I need. I want to overlay runtime_general and runtime_specific (y-axis) and have the x-axis be guid. I have two pertinent log types:

1) "A: guid, runtime_general" where A is always the same (since I'm searching on A)
2) "guid, runtime_specific"

How do I chart all the guids I get by searching for value A, and overlay both runtimes? I'm very unfamiliar with Splunk, so any help would be appreciated.
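A sketch of one approach, assuming both log types share the extracted guid field and land in the same index (index name is a placeholder): pull both event shapes in one search, then aggregate per guid so each runtime becomes its own series:

```
index=myindex ("A" OR runtime_specific=*)
| stats values(runtime_general) AS runtime_general,
        values(runtime_specific) AS runtime_specific
        BY guid
```

Rendered as a column chart, guid becomes the x-axis and the two runtime columns overlay as separate series.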
Hi All, I want to enable mTLS in my Splunk cluster on all the communication channels. I have a peer certificate that works as both server and client. Enabling SSL is successful when I set requireClientCert = false in web.conf. However, when I set requireClientCert = true, I get the errors below:

ERROR X509Verify - X509 certificate (CN=myCompanyCN) failed validation; error=19, reason="self signed certificate in certificate chain"
WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read client certificate B', alert_description='unknown CA'.
WARN HttpListener - Socket error from 127.0.0.1:60580 while idling: error:14089086:SSL routines:ssl3_get_client_certificate:certificate verify failed - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name.

Here are my conf files:

server.conf

[sslConfig]
enableSplunkdSSL = true
useClientSSLCompression = true
sslVersions = tls1.2
serverCert = $SPLUNK_HOME/etc/auth/mycerts/peer-chain-with-key.pem  <=== contains peer cert, key, intermediate certs, root CA cert, in this order
caCertFile = $SPLUNK_HOME/etc/auth/mycerts/ca-chain.pem
sslVerifyServerCert = true
requireClientCert = true

web.conf

# Securing Splunk Web
enableSplunkWebSSL = true
privKeyPath = etc/auth/mycerts/peer-key.pem
serverCert = etc/auth/mycerts/peer-chain-cert-without-key.pem  <==== contains peer cert, intermediate certs & root CA cert, in this order
sslVersions = tls1.2
requireClientCert = true

Any help, please?
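Since the HttpListener warning itself suggests `openssl verify`, a first diagnostic step (a sketch, using the paths from the server.conf above) is to check that the presented certificate actually validates against the configured CA file, and that no CA in ca-chain.pem shares a Common Name with the peer certificate:

```
openssl verify -CAfile $SPLUNK_HOME/etc/auth/mycerts/ca-chain.pem \
    $SPLUNK_HOME/etc/auth/mycerts/peer-chain-cert-without-key.pem
```

The "self signed certificate in certificate chain" error often points at the root CA being included in the served chain while the verifier only trusts a different copy of it; comparing the chain file's certificates against ca-chain.pem (e.g. with `openssl x509 -noout -subject -issuer`) is a reasonable next check.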
Greetings, @dannyjump or anyone else that might have this app installed, Just curious whether or not there are people out there that have this installed for Splunk Enterprise on-prem 7.3.x or 8.0.x. We've got updates planned and the last I saw in here, the app works with 7.1 and also with 7.2 per what the app developer has commented. From what I have seen of this app, I'm fairly certain a move to at least 7.3 shouldn't affect much (8.0.x may be a different story), but wanted to check. Thanks in advance!
I would like to get a count of the errors that different objects have generated in Splunk. All of them have an Error field. This is my query:

index="db-woodchipper" earliest=-7d@d latest=now "\"Error\":" | table *.Error

Raw events:

{"SalesforceUpdater": {"MessageBody": {"ServerName": "xxxxxx", "DbName": "xxx@xxxxx.com"}, "Error": "FATAL: database \"xxxx@xxx.xxx\" does not exist\n"}}
{"EmailSettingsCorrection": {"MessageBody": {"ServerName": "xxxxxx", "DbName": "xxxxxxx"}, "Task": "EmailSettingsCorrection", "Error": "FATAL: database \"xxxxxx\" does not exist\n"}}

However, I would like to have something like:

Operation | Count | Count Distinct
EmailSettingsCorrection | 10 | 2
SalesforceUpdater | 5 | 1

And so on...
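A sketch of getting from those raw events to the desired table: the operation name is the top-level JSON key, which `foreach` can recover from the wildcarded field name via its <<MATCHSTR>> token (ErrorText is a name I chose):

```
index="db-woodchipper" earliest=-7d@d latest=now "\"Error\":"
| spath
| foreach *.Error
    [ eval Operation="<<MATCHSTR>>", ErrorText='<<FIELD>>' ]
| stats count AS Count, dc(ErrorText) AS "Count Distinct" BY Operation
```

The single quotes around <<FIELD>> are needed because the expanded field name contains a dot.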
I have an index of my GCP firewalls (all of them) and I need to match it against another dataset (allowed firewalls, a CSV), then return only the results whose values match the allowed-firewalls list, with data from the gcp_firewall index. Thoughts on what I need to add to the search to achieve that? Example syntax below:

index=gcp_firewall "data.jsonPayload.connection.src_ip"="*"
| rename data.jsonPayload.connection.src_ip as Source
| rename data.jsonPayload.connection.dest_ip as Destination
| rename data.jsonPayload.connection.dest_port as Port
| rename data.jsonPayload.instance.vm_name as Name
| rename data.jsonPayload.rule_details.reference as firewall
| dedup Source
| table Source Name Destination Port firewall
| stats count by firewall

This returns EVERY firewall in GCP, when I really just want it to return the ones that match the allowed-firewalls CSV.
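One way to restrict to the allowed list (a sketch, with assumed lookup file and column names): feed the CSV in as a subsearch filter after the rename, so only matching firewall values survive:

```
index=gcp_firewall "data.jsonPayload.connection.src_ip"="*"
| rename data.jsonPayload.rule_details.reference AS firewall
| search [ | inputlookup allowed_firewalls.csv | fields firewall ]
| stats count BY firewall
```

The subsearch expands to (firewall="..." OR firewall="..."), so the column header in the CSV must match the renamed field name.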
I have downloaded this application, and the default source type this app uses is "zyxel-fw". I want to use my own custom source type for Zyxel logs. Is it possible to do that?
Hello guys, I would like your help. I need to monitor specific AD security groups and catch when someone is added to those groups. However, when I perform the following search using "Group_name", I get no results:

index=main (EventCode=4756 OR EventCode=4728 OR EventCode=4732) Group_name:"Group_A"

When I search using "Account_Name" I do get results, but Account_Name holds not only the group name, but also the user who added the account to the group and the user who was added. I can't build a table if one column shows three different kinds of values:

index=main (EventCode=4756 OR EventCode=4728 OR EventCode=4732) Account_Name=Group_A

Look at the details below. You can see that there are three different values for Account_Name:

Subject:
  Security ID: S-1-5-21-1659001184-1614895754-725345543-1010
  Account Name: (user who took the action of adding the account to the group)
  Account Domain: XYZ
  Logon ID: 0x30315A0B
Member:
  Security ID: S-1-5-21-1659001184-1614895754-725345543-62020
  Account Name: CN=UserX,OU=XYZ,OU=XYZ,OU=XYZ,OU=XYZ,DC=XYZ,DC=XYZ
Group:
  Security ID: S-1-5-21-1659001184-1614895754-725345543-423030
  Account Name: Group_A
  Account Domain: XYZ

thx
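A sketch of one workaround: since every "Account Name:" line lands in the same multivalue field, a rex anchored on the Group: section can pull the group name into its own field (Group_Name is a name I chose; the (?s) flag lets . span the line breaks between "Group:" and its "Account Name:"):

```
index=main (EventCode=4756 OR EventCode=4728 OR EventCode=4732)
| rex "(?s)Group:.*?Account Name:\s+(?<Group_Name>[^\r\n]+)"
| search Group_Name="Group_A"
| table _time EventCode Group_Name
```

The same pattern anchored on "Subject:" and "Member:" can extract the actor and the added member into separate columns.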
I am trying to monitor my own Windows laptop using the machine agent.

1. I installed the machine agent for Windows from Getting Started.
2. Added these lines to the controller-info.xml file:

<force-agent-registration>true</force-agent-registration>
<application-name>windows-machine</application-name>
<node-name>node1</node-name>
<tier-name>tier1</tier-name>

3. In the file monitors\analytics-agent\conf\analytics-agent.properties, added the inputs below:

ad.controller.url=https://Controller-host/
http.event.endpoint=https://Controller-host/
http.event.name=Account_name
http.event.accountName=Global_Account_Name
http.event.accessKey=Access_key_from_License page

4. Started the machine agent jar file: java -jar machineagent.jar

But I am getting the message below, and the terminal halts. Kindly advise on what I am missing.
Hello, I have a project where I need to integrate Entrust Identity Guard with Splunk. Can you help me or guide me on how I can make the integration? Is it necessary to have "Entrust Intelliguard" installed to make the integration?
Hi folks, I'm having a hard time with this one. Maybe I need more coffee. Say I have several events like this:

EventA IP=192.168.0.22 DeviceName=192.168.0.22
EventB IP=192.168.0.22 DeviceName=Workstation1
EventC IP=192.168.0.33 DeviceName=192.168.0.33
EventD IP=192.168.0.33 DeviceName=Workstation2
EventE IP=192.168.0.44 DeviceName=192.168.0.44
EventF IP=192.168.0.44 DeviceName=Workstation3
EventG IP=192.168.0.44 DeviceName=Workstation4
EventH IP=192.168.0.44 DeviceName=Workstation5
EventI IP=192.168.0.55 DeviceName=192.168.0.55
EventJ IP=192.168.0.66 DeviceName=192.168.0.66

The goal in this sample would be to remove events A, C, and E, since I have hostnames for those IPs, but I can't simply dedup on IP because I need to keep F, G, and H: they're actually different machines that happened to get the same IP address at some point over the time range of the search. Essentially, drop events where match(IP, DeviceName) if there's more than one event with that IP, and I'm stumped on getting that to work. I've done this and it actually does the trick:

| eventstats values(DeviceName) as DevicesPerIP by IP
| eval DevicesPerIP=if(mvcount(DevicesPerIP)>1, mvfilter(NOT match(DevicesPerIP, "\.")), DevicesPerIP)
| mvexpand DevicesPerIP
| dedup IP DevicesPerIP sortby -_time
| eval DeviceName=DevicesPerIP

but the mvexpand exceeds my memory limit, and I'd rather not push a limits.conf change out to the cluster if there's another way to accomplish this. Anyway, I think more coffee is in order. Any other ideas?
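An mvexpand-free sketch of the same idea: count distinct device names per IP with eventstats, then keep an event unless it is the IP-as-name placeholder for an IP that also has a real name:

```
| eventstats dc(DeviceName) AS names_per_ip BY IP
| where DeviceName!=IP OR names_per_ip=1
```

Against the sample above, this keeps I and J (only one name for the IP), keeps F/G/H (real names), and drops A, C, and E, without expanding any multivalue fields.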
Someone suggested a join, but as a newbie I don't know how to do this. I believe I would need two searches: 1) the user activity, and 2) the list of KNOWN users from the ksv. I can do both of those, but how do I see the users without activity?
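A join-free sketch, assuming the known users live in a CSV lookup with a user column and that the activity events carry a matching user field (all names here are placeholders): seed every known user with zero activity, append the real counts, and keep the zeros:

```
| inputlookup known_users.csv
| fields user
| eval activity=0
| append [ search index=myindex | stats count AS activity BY user ]
| stats sum(activity) AS activity BY user
| where activity=0
```

Users who appear only in the lookup keep a sum of 0 and survive the final filter; active users pick up a positive count and are dropped.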
So, admittedly, my skills at writing Python are about the same as my skills would be at hitting a Randy Johnson fastball (yeah, I'm not real successful at either, but sometimes I might get lucky...), but I need to know how to do what the REST builder can do in the Event Extraction settings, where a JSON path can be entered to break events properly. So basically I need to know how to do the same thing in my own Python code. Any help or insight is much appreciated! Thanks!
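For what it's worth, the core of that event-extraction setting is just walking a dotted path into the parsed JSON and emitting one event per array element. A minimal sketch under that assumption (the function name and dotted-path convention are mine, not the REST builder's):

```python
import json

def split_events(raw_body, json_path):
    """Parse an API response body and break it into one JSON event per
    item, roughly what the REST builder's 'JSON path' setting does."""
    node = json.loads(raw_body)
    for key in json_path.split("."):   # descend e.g. "result.entries"
        node = node[key]
    items = node if isinstance(node, list) else [node]
    # each element becomes one serialized Splunk event
    return [json.dumps(item) for item in items]

# usage: two events out of one response body
body = '{"result": {"entries": [{"id": 1}, {"id": 2}]}}'
events = split_events(body, "result.entries")
```

In a real modular input each returned string would be written out as its own event; error handling for missing keys is omitted here.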
So I created a basic alert to test in Splunk Mobile: just a stats count by host, with a trigger action to send the alert to mobile. I do get the alert in my Splunk Mobile app with the title and app name (search). But when I tap it, expecting to see a table with host and count, I get an error saying "Alert does not contain a supported visualization" in Splunk Mobile. I hadn't selected any visualization initially. After that I created one more alert, saved it as a column chart, and tested again. Nothing. Is there anything I am missing here?
I am running a RHEL 7 server, and noticed that my Splunk forwarder client is not reporting in. I am running iptables. Here are the rules I've added:

-A INPUT -p tcp -m tcp --dport 8089 -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 8089 -j ACCEPT
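A note that may explain the symptom: 8089 is splunkd's management port, while forwarder-to-indexer traffic goes to the receiving port (9997 by default; check outputs.conf) and originates outbound from the forwarder. A sketch of rules for the default port, assuming iptables runs on both hosts:

```
# On the indexer: accept incoming forwarder traffic
-A INPUT -p tcp -m tcp --dport 9997 -j ACCEPT
# On the forwarder: allow the outbound connection (only needed if OUTPUT is restricted)
-A OUTPUT -p tcp -m tcp --dport 9997 -j ACCEPT
```

With a default ACCEPT policy on OUTPUT and stateful rules, only the indexer-side INPUT rule is typically required.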