All Posts



Greetings! We are trying to generate a table from the output of a Splunk query. We want to pipe (|) this into our query but do not know how to do this. Can someone assist?

This is the output after we ran our Splunk query:

Feb 13 20:36:21 hostname1 sshd[100607]: pam_unix(sshd:session): session opened for user user123 by (uid=0)
Feb 13 20:36:23 hostname2 sshd[100608]: pam_unix(sshd:session): session opened for user user345 by (uid=0)

We want to capture the table in this form:

Time               Hosts       Users
Feb 13 20:36:21    hostname1   user123
Feb 13 20:36:23    hostname2   user345

And so on. How do we do this? Thank you in advance!
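One way to build that table is with rex extractions piped after the base search. This is a minimal sketch; the index, sourcetype, and field names are assumptions, not your actual configuration:

```
index=main sourcetype=linux_secure "session opened for user"
| rex "^(?<Time>\w{3}\s+\d+\s+\d{2}:\d{2}:\d{2})\s+(?<Hosts>\S+)\s+sshd"
| rex "session opened for user (?<Users>\S+)"
| table Time Hosts Users
```

If the timestamp and host are already extracted at index time, `| table _time host Users` may be enough without the first rex.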
I need some help updating the mmdb file for the iplocation command. I've read the other forum questions about this, as well as the docs, and I am a bit confused.

I initially uploaded the new mmdb file from MaxMind, GeoLite2-City.mmdb, through the GeoIP panel on the Lookups tab.

It uploads, but I can't seem to find the file afterwards. I am looking on the specific server I uploaded the file to; we have a clustered environment, but that one server should have it. I ran locate and find commands but could not locate it. We still have the original under $SPLUNK_HOME/share/dbip-city-lite.mmdb.

Even though the drop box for the mmdb file showed a successful upload, I can not find it anywhere. I don't see any trace of the upload through splunkd, through /export/opt/splunk/var/run/splunk/upload/, or through any find or locate command.

I wanted to update the file path to include both databases, and I know I needed to change limits.conf to include both paths. But the question is: how do I change limits.conf so that it replicates? We don't have any app named TA-geoisp or anything similar, and that's what these forums and docs reference.

Somewhere I saw that I could update the Search app's limits.conf and just push that from the shcluster directory, as that pushes a bundle change out to all search heads in the cluster. Since the Search app is the default app, we could just use it to point to the mmdb files. But we don't have the Search app located under /$SPLUNK_HOME/etc/shcluster/apps/.

We don't seem to have the Search app under our cluster master/deployer shcluster directory. I think I might be missing something. I would basically just like to update limits.conf to point to the new directory path of both of the mmdb files.
I'd like to just edit limits.conf to look like:

[iplocation]
MMDBPaths = /path/to/your/GeoIP2-City.mmdb,/path/to/your/dbip-city-lite.mmdb

The question I'm trying to ask here is: when I upload the file through the GUI, where does the file end up? And if I wanted to push these changes manually to all search heads and indexers from the deployer and deployment server, how do I go about replicating the folder that holds the mmdb files as well as the limits.conf that holds the paths to them?

Thank you for any assistance.
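One way to make both pieces replicate is to package the database file and the limits.conf together in a small app on the deployer and push it as a bundle. This is only a sketch: the app name geoip_update and all paths below are assumptions, and the MMDBPaths setting name should be verified against the limits.conf spec for your Splunk version before relying on it:

```
$SPLUNK_HOME/etc/shcluster/apps/geoip_update/
├── GeoLite2-City.mmdb
└── local/limits.conf

# local/limits.conf
[iplocation]
MMDBPaths = /opt/splunk/etc/apps/geoip_update/GeoLite2-City.mmdb,/opt/splunk/share/dbip-city-lite.mmdb

# then, on the deployer:
# splunk apply shcluster-bundle -target https://<any-search-head>:8089
```

Shipping the mmdb inside the app keeps the file and the path that references it in one replicated unit.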
Correct. This is applicable for 9.1.0 and above.
Still trying to get only the Russian IPs. It still pulls the private IPs.
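A sketch of filtering out RFC 1918 private ranges before matching on country. The field name src_ip is an assumption, and the Country value iplocation returns for Russia is typically "Russian Federation", which is worth confirming against your data:

```
... | where NOT (cidrmatch("10.0.0.0/8", src_ip)
        OR cidrmatch("172.16.0.0/12", src_ip)
        OR cidrmatch("192.168.0.0/16", src_ip))
| iplocation src_ip
| search Country="Russian Federation"
```

Dropping the private ranges first also saves iplocation from doing lookups that can never match a country.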
I am relatively new to the Splunk coding space, so bear with me on my inquiry. Currently I am trying to create a table where each row has the _time, host, and a unique field extracted from the entry:

_Time   Host            Field-Type   Field-Value
00:00   Unique_Host_1   F_Type_1     F_Type_1_Value
00:00   Unique_Host_1   F_Type_2     F_Type_2_Value
00:00   Unique_Host_1   F_Type_3     F_Type_3_Value
00:00   Unique_Host_2   F_Type_1     F_Type_1_Value
00:00   Unique_Host_2   F_Type_2     F_Type_2_Value
00:00   Unique_Host_2   F_Type_3     F_Type_3_Value
...

The data given for each server:

Field-Type=F_Type_1,.....,Section=F_Type_1_Value
Field-Type=F_Type_2,.....,Section=F_Type_2_Value
Filed-Type=F_Type_3,.....,Section=F_Type_3_Value

I have created 3 field extractions for the F_Type values:

(.|\n)*?\bF_Type_1.*?\b Section=(?<F_Type_1_Value>-?\d+)

This is what I have done so far for the table:

index="nothing" source-type="nothing" | first(F_Type_1) by host

I am not sure this is the best approach, and I can also refine the field extraction if needed. Generally, my thought process follows: Source | Obtain first entries for all the hosts | Extract field values | Create table. But I am currently hitting a roadblock in the syntax to create rows for each of the unique Field-Types and their values.
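One alternative to three separate extractions is a single generic extraction of the type/value pair, then a stats to keep the latest row per host and type. This is a sketch; the index, sourcetype, and the two regexes are assumptions about your event layout:

```
index="nothing" sourcetype="nothing"
| rex "Field-Type=(?<FieldType>[^,]+)"
| rex "Section=(?<FieldValue>-?\d+)"
| stats latest(_time) as _time latest(FieldValue) as FieldValue by host FieldType
| table _time host FieldType FieldValue
```

Because each event carries one Field-Type, this naturally yields one row per host per type without naming each type in the search.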
Hello! I am trying to send data to Splunk using UDP. I tried to set it up using the documentation and watched a few videos on how to do it, but can't get it right. I have the data coming into my HF from network devices, and it should then be sent to my indexers. After going through the setup I get this error message:

"Search peer splunk_indexer_02 has the following message: Received event for unconfigured/disabled/deleted index=<index> with source="source::udp:514" host="host::xx.xx.xx.xx" sourcetype="sourcetype::<sourcetype>. So far received events from 2 missing index(es)."

I created a new index during the setup, but there is no data to search.
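That error means the indexers themselves don't have the index the input writes to; creating it only on the HF or a search head is not enough. A sketch of the two pieces, with the index and sourcetype names as assumptions:

```
# inputs.conf on the heavy forwarder
[udp://514]
sourcetype = syslog
index = network

# indexes.conf on every indexer (or pushed from the cluster manager)
[network]
homePath   = $SPLUNK_DB/network/db
coldPath   = $SPLUNK_DB/network/colddb
thawedPath = $SPLUNK_DB/network/thaweddb
```

Once the index exists on the indexers, the queued events should start landing and be searchable.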
Tried the query below, but not getting values by site:

index=abc mal_code=xyz TERM(application) OR (TERM(status) TERM(success)) NOT (TERM(unauthorized) TERM(time) TERM(mostly)) site=*
| stats count by Srock site
| stats sum(count) as Success
| appendcols
    [search index=abc mal_code=xyz (TERM(unauthorized) TERM(time) TERM(mostly)) NOT (TERM(status) TERM(success)) site=*
    | stats count by ID site
    | fields ID site
    | eval matchfield=ID
    | join matchfield
        [search index=abc mal_code=xyz site=* "application"
        | stats count by Srock site
        | fields Srock site
        | eval matchfield=Srock]
    | stats count(matchfiled) as Failed]
| eval Total=Success+Failed
| eval SuccessRate=round(Success/Total*100,2)
| table *
Two independent search heads may have separate sets of field extractions defined, which would explain why you don't see the same fields on each SH.  Use btool on each SH to view and compare the props and transforms settings. If your query references a data model then that data model is used in your search; otherwise, no data model is used.
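A sketch of the comparison, run on each search head in turn; the sourcetype name is a placeholder:

```
splunk btool props list <your_sourcetype> --debug > sh1_props.txt
splunk btool transforms list --debug > sh1_transforms.txt
# repeat on the other SH, then diff the two sets of files
```

The --debug flag prints which app and file each setting comes from, which makes the source of any divergence obvious.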
The eval command merely assigns a value to a field (variable).  It has no effect on the time picker. What *does* override the time picker are the earliest and latest options in the search command.
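For example, the following searches the last 24 hours regardless of the time picker, because earliest/latest in the search command take precedence, while the eval just computes a field value (the index name is a placeholder):

```
index=_internal earliest=-24h@h latest=now
| eval yesterday=relative_time(now(), "-1d@h")
| table _time yesterday
```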
Hello, your code moved the panel to the left when you set it to 40%, but not the text; it showed a grey area on the right side. I tried using 100%, and it turns out text-align: left didn't do anything.

Thanks. I would like to align the text like the following:
Hi, I am working my way through some of the Splunk courses. I am currently on "Working with Time". In one of the videos, the following command is used to find all results within the past day, rounding down: "| eval yesterday = relative_time(now(),"1d@h")". However, when I attempt this command myself, it simply prints the "yesterday" value while the search uses the time specified in my time picker, not in the actual command. I was under the impression that any time specified within a command would automatically override the time picker. Was I mistaken in this? Or am I perhaps using the command incorrectly? Any help would be greatly appreciated.
The custom app will need a different sourcetype name if the sourcetype is to have different settings from the official app.  That means you also will need to change the input to use the custom sourcetype.
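A sketch of the two files involved; the sourcetype name, path, and settings below are assumptions for illustration, not the official app's values:

```
# props.conf in the custom app
[my_custom:sourcetype]
SHOULD_LINEMERGE = false
TIME_FORMAT = %Y-%m-%d %H:%M:%S

# inputs.conf, updated to point at the custom sourcetype
[monitor:///var/log/myapp/app.log]
sourcetype = my_custom:sourcetype
index = main
```

Keeping the custom sourcetype name distinct avoids the official app's settings silently winning by configuration precedence.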
My company is transitioning from an on-premise MFA setup within ADFS to the Azure MFA setup.  What's the best approach to getting those MFA events into Splunk?  Does the Splunk Addon for Microsoft Azure (splunkbase 3757) meet that goal?  
Been struggling for a while on this one. On-prem Splunk Enterprise, v9.1.2, running on CentOS 7.9.

Just trying to find a consistent way to upload log files through HTTP Event Collector (HEC) tokens. I found the whole RAW vs. JSON thing confusing at first and thought the only way to specify/override values like host, sourcetype, etc. was to package up my log file in the JSON format. Discovered today that you can specify those values in the RAW URL, like so:

https://mysplunkinstance.com:8088/services/collector/raw?host=myserver&sourcetype=linux_server

which was encouraging. It seemed to work, and I think I've gotten further ahead. I now have this, effectively, as my curl command running in a bash script:

curl -k https://mysplunkinstance.com:8088/services/collector/raw?host=myserver&sourcetype=linux_server -H "Authorization: Splunk <hec_token>" -H "Content-type: plain/text" -X 'POST' -d "@${file}"

Happy to report that I now see the log data. However, it only seems happy if it's a single-line log. When I give it a log file with more lines, it just jumbles it all together. I thought it would honour the configuration rules we have programmed for sourcetype=linux_secure (from community add-ons and our own updates), but it doesn't. Loading the same file through Settings -> Add Data has no problem properly line-breaking per the configuration.

I'm guessing there is something I am missing in how one is meant to send RAW log files through HEC?
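Two things in that command are worth checking; this sketch is a guess at the cause, not a confirmed fix. First, the unquoted URL lets the shell treat & as a background operator, so the sourcetype parameter may never reach Splunk. Second, curl's -d strips newlines and carriage returns when reading a file, while --data-binary sends it byte-for-byte, which is what line breaking needs. Host, token, and file path are placeholders:

```
file=/var/log/secure
curl -k "https://mysplunkinstance.com:8088/services/collector/raw?host=myserver&sourcetype=linux_secure" \
     -H "Authorization: Splunk <hec_token>" \
     -H "Content-Type: text/plain" \
     --data-binary "@${file}"
```

With the newlines preserved, the sourcetype's line-breaking rules have something to act on.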
Anyone know how and what path to query on splunkcloud instance to pull existing SAML configuration details and certificate? I can view the information by browsing to settings -> authentication method -> SAML -> SAML configuration. I want to be able to export that information if it is captured in a file as a backup prior to migrating to different authentication method.  Thanks in advance.  
We see this exact issue, and it started after upgrading to 9.2.0.1. Suppressing the warning works as expected, but I was curious whether you found this specific to 9.2. We are upgrading from 9.0.5, so it may have been introduced in 9.1 as well.
Correction, I need to re-pin them in reverse order, as the most recently pinned app goes to the top.
Hi, based on your screenshots it's just as you said, and the docs state this incorrectly. You should leave a comment/correction on that doc page; they are happy to get feedback and will correct it sooner or later.

On Linux that user is splunkfwd, as the docs say.

r. Ismo
Now I notice that all of the apps that existed prior to the upgrade are already pinned. The only unpinned apps are the few we have added since then. Presumably that means I can just unpin everything and then re-pin in the order I want. I miss the dragging.
Hi, if you need a silent installation on macOS, then it's probably better to use the tar.gz package?

r. Ismo