All Topics

I am trying to accomplish a few actions:
1. Move the standalone server from one location to a different location.
2. Upgrade from the old server with an unsupported Splunk version to a new server with the latest version, all running on Windows.
3. Make sure the systems reporting to the old console report into the new one.
4. Eventually decommission the old instance.
It was recommended to me to make the new instance the search head and connect to the old instance as a search peer, but then I started to see the following errors in the new instance:
"Problem replicating config (bundle) to search peer 'X.X.X.X:8089', Upload bundle="C:\Program Files\Splunk\var\run\hostname-1654707664.bundle" to peer name=hostname uri=https://X.X.X.X:8089 failed; http_status=409 http_description="Conflict""
Then I tried to point the Universal Forwarders to the new instance using CLI commands, since there was no outputs.conf on the old instance, and got the following errors:
RED - the health status for the splunkd service
RED - TCPOUTAutoLB-Q
RED - TailReader-0
I really appreciate all the help. Thank you!
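For reference, a minimal outputs.conf sketch for repointing a forwarder at the new instance; the host name and port below are placeholders (9997 is only the conventional receiving port, and receiving must be enabled on the new instance first):

# outputs.conf on each Universal Forwarder
[tcpout]
defaultGroup = new_indexer

[tcpout:new_indexer]
# placeholder host/port; use the new instance's receiving address
server = new-splunk-host:9997

The CLI equivalent is "splunk add forward-server new-splunk-host:9997", which writes the same stanza.
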
We are extracting fields with spaces in them using the transforms below. Is there a way to remove the spaces inside the field names from the backend? There are hundreds of fields with spaces; I tried field aliases and it's hard to apply one for each and every field.

Transforms:
[sourcetype]
SOURCE_KEY = _raw
REGEX = (?<field>[a-zA-Z ]+):(?<value>.+)
FORMAT = $1:$2

_raw:
"Process Create: Utc Time: 2022-04-28 22:08:22.025 Process Guid: {XYZ-bd56-5903-0000-0010e9d95e00} Process Id: 6228 Image: chrome.exe Command Line: test"

Output I am getting: "Process Id" = 6228
Is there a way to change this to ProcessId=6228 or Process-Id=6228?

From the UI I tried this; can someone help me with a backend trick?

| makeresults
| eval _raw="Process Create:true Utc Time: 2022-04-28 22:08:22.025 Process Guid: {XYZ-bd56-5903-0000-0010e9d95e00} Process Id: 6228 Image: chrome.exe Command Line: test"
| rex field=_raw max_match=0 "(?<field>[a-zA-Z ]+):(?<value>.+)"
| rex mode=sed field=field "s/ /_/g"
| eval tmp=mvzip(field,value,"=")
| rename tmp as _raw
| kv
| table *
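One backend option, sketched under the assumption that the problem keys are two words separated by a single space and followed by a colon: rewrite _raw at index time with a SEDCMD in props.conf so the spaces are gone before extraction runs (keys with more than two words would need the pattern extended):

# props.conf on the indexer or heavy forwarder that parses this sourcetype
[sourcetype]
# turns "Process Id:" into "ProcessId:", "Utc Time:" into "UtcTime:", etc.
SEDCMD-strip_key_spaces = s/([A-Za-z]+) ([A-Za-z]+):/\1\2:/g
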
Hi All, I have logs from db_inputs/a custom script where the owner is not indexing the custom time field as _time, and they are importing all the data every day rather than incrementally. So I need to find assets from the last 7 days using a custom time field. The custom time field is last_found, e.g.:
2020-07-06T17:42:29.322Z
2020-01-06T17:42:29.322Z
2020-01-05T17:42:29.322Z
2020-01-04T17:42:29.322Z
From these date/time values, how can I search for only the assets whose last_found falls within the last 7 days? Please help with the query, that would be a great help. Thanks!
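A minimal SPL sketch, assuming last_found always uses that ISO-8601 UTC format; the index name and the asset_id dedup field are placeholders:

index=my_assets
| eval last_found_epoch = strptime(last_found, "%Y-%m-%dT%H:%M:%S.%3QZ")
| where last_found_epoch >= relative_time(now(), "-7d@d")
| dedup asset_id
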
For context, I'm creating a dashboard where a user can search the activity of all hosts in an environment or one host in that same environment. Unfortunately, the naming convention used for hostnames makes searching all hosts in a specific environment a bit more complicated than using a single field/value pair with a wildcard. For example, searching all non-production hosts would require a search similar to the following in my case:

index=servers host!="*prd*" AND (host="*30*" OR host="*40*")

In the dashboard, I'd like the user to be able to select a single hostname from a dropdown, or an "All Servers" option from the dropdown. With that being said, is there a way I can map all the hostnames to a single "field value" such that something like

index=servers host=allhosts

would accomplish the same thing as my initial search example? This would be helpful as it would allow me to use a token for the host field when a user selects an option from the hosts dropdown.
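One way to get that single handle, sketched here with an eventtype (the name nonprod_hosts is arbitrary): define the compound filter once, then the dashboard token can emit either eventtype=nonprod_hosts for "All Servers" or host=<name> for a single host.

# eventtypes.conf (or Settings > Event types in the UI)
[nonprod_hosts]
search = index=servers host!="*prd*" (host="*30*" OR host="*40*")

The "All Servers" search then becomes: index=servers eventtype=nonprod_hosts
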
Working on migrating from a RHEL 6 VM running Splunk 8.0.5 to a RHEL 8 VM with the latest Splunk, 8.2.6 (no clustering). I read and followed the installation and migration docs, and I've been able to test with some old data that it's working.

Another thing I'd like to do is optimize the indexes better, put them on new VM disks, and distribute them between hot/warm and cold/frozen; the problem is our indexes are pretty big.

My understanding is that in order to move/migrate the indexes, I'll need to stop Splunk on the old host and copy/rsync the directories over, then modify the index configuration on the new host and start Splunk on it (of course with the required DNS pointing and forwarder reconfiguration to point to the new host). But the volumes I have are about 3 TB, 3 TB, and a large one of 25 TB. I tested rsync with some data directories and it looks like it would take several days. For the large volume I don't think I have any other choice but to remove the disks from the old VM and attach them to the new one. But even for the two smaller volumes it looks like it will take almost 24 hours to copy over 5-6 TB, and I don't think I can keep Splunk stopped for that long; it would definitely lose data.

Am I understanding this correctly, or is there a better and/or quicker way to do this?

The reason I wanted to use new VM disks is that the old host has several VM disks combined to make each of the OS volumes and it's just messy (e.g. the 25 TB mount point has 3 underlying disks); plus, with new disks I can also distribute the indexes and hot/cold buckets better between fast and not-so-fast storage. Would really appreciate it if anyone can provide suggestions/advice.
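A common way to shrink the downtime window, sketched as shell under the assumption that the index paths are identical on both hosts: run a bulk rsync while the old Splunk is still up (taking days is fine at this stage), then stop Splunk and run a second rsync that only transfers the delta, which is typically a small fraction of the data.

# pass 1: bulk copy while the old instance keeps running
rsync -a /opt/splunk/var/lib/splunk/ newhost:/opt/splunk/var/lib/splunk/

# pass 2: stop Splunk on the old host, then sync only what changed
/opt/splunk/bin/splunk stop
rsync -a --delete /opt/splunk/var/lib/splunk/ newhost:/opt/splunk/var/lib/splunk/

Forwarders also buffer a limited amount while the receiver is down, which further shortens the at-risk window during pass 2.
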
<single>
  <search>
    <query>|query</query>
    <earliest>$time_token2.earliest$</earliest>
    <latest>$time_token2.latest$</latest>
  </search>
  <option name="drilldown">all</option>
  <option name="refresh.display">progressbar</option>
  <drilldown>
    <set token="hide_details">$click.name$</set>
    <unset token="form.hide_details"></unset>
  </drilldown>
</single>
<input type="checkbox" token="hide_details">
  <label></label>
  <choice value="hide">Hide Details</choice>
  <change>
    <condition value="hide">
      <unset token="hide_details"></unset>
    </condition>
  </change>
  <delimiter> </delimiter>
</input>
I am trying to ingest CyberArk EPM logs into Splunk Cloud and found the doc related to it: https://docs.splunk.com/Documentation/AddOns/released/CyberArkEPM/About but it does not mention the configuration on the CyberArk side. Can someone help me with the configuration steps from the CyberArk side? The Splunk Add-on for CyberArk EPM asks for a username and password, and I don't know where to get those details from. Thanks in advance!!
As I understand it, es_notable_events is a KV store collection that stores notable event information for the last 48 hours; there is also a panel in the ES Audit dashboard that shows Notable Events By Owner - Last 48 Hours. Is there any way to build a similar chart for older notables? Thanks, Deovrat
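A sketch of one approach, assuming a standard ES install: notable events persist in index=notable well beyond 48 hours, and the `notable` macro shipped with ES enriches them with owner/status from the incident review collection, so an arbitrary time range works:

| `notable`
| timechart span=1d count by owner
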
Hello team, please, I need a solution to this question: I have three column fields, startDate, endDate, and ARTstartDate. I want to count the number of ARTstartDate values that fall in between the two dates (i.e. from startDate to endDate).
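A minimal sketch, assuming all three fields share a single date format; the "%Y-%m-%d" format string is a placeholder to adjust to the real data:

... base search ...
| eval start=strptime(startDate, "%Y-%m-%d"), end=strptime(endDate, "%Y-%m-%d"), art=strptime(ARTstartDate, "%Y-%m-%d")
| eval in_range=if(art>=start AND art<=end, 1, 0)
| stats sum(in_range) as ART_count_in_range
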
Hey everyone, and I hope you're having a great day! I have configured a custom field extraction in the Splunk search app for my sourcetype, but I don't have the ability to share it with other users, unlike on another Splunk instance where I have the Power role (with the Power role, I can share it no problem). I don't want to assign myself the Power role since it's broad and wouldn't follow the principle of least privilege. So which permission would I need to assign myself in order to be able to share my field extraction with other users?
Hi, is it possible to transform this table:

column a | column b | column c | column d | column e | column f | column g
aaa      | bbb      | ccc      | ddd      | eee      | fff      | ggg

into this:

column a | column b | column c | column d | name     | value
aaa      | bbb      | ccc      | ddd      | column e | eee
aaa      | bbb      | ccc      | ddd      | column f | fff
aaa      | bbb      | ccc      | ddd      | column g | ggg

Simone
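A sketch of one way to unpivot those three columns in SPL; note that field names containing spaces need single quotes when read inside eval:

| eval tmp=mvappend("column e=".'column e', "column f=".'column f', "column g=".'column g')
| mvexpand tmp
| eval name=mvindex(split(tmp,"="),0), value=mvindex(split(tmp,"="),1)
| fields - tmp
| table "column a" "column b" "column c" "column d" name value
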
Splunk Connect for Zoom (Version 1.0.1, April 23, 2020, https://splunkbase.splunk.com/app/4961/): disable the use of the TLSv1.0 protocol in favor of a cryptographically stronger protocol such as TLSv1.2. Finding from Qualys QID-38628 on tcp port 4443. The following openssl command can be used to do a manual test:

openssl s_client -connect ip:port -tls1

If the test is successful, then the target supports TLSv1.
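If the port-4443 listener is an SSL-enabled Splunk input defined by the app, the minimum TLS version is governed by Splunk's sslVersions setting; a sketch, under the assumption that the relevant stanza is the [SSL] one in inputs.conf on the instance running the app (server-wide, [sslConfig] in server.conf takes the same setting):

# inputs.conf on the instance running Splunk Connect for Zoom
[SSL]
# accept only TLS 1.2 handshakes, rejecting tls1.0/tls1.1
sslVersions = tls1.2
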
Can the deployment server be used to deploy apps/configuration to a search head, and also transforms/props/configuration/apps to indexers? To date I'm only using it to deliver apps to universal forwarders. Note: using Splunk Enterprise 8.2.5 on Windows.
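For what it's worth, targeting other instance types is just another server class; a minimal serverclass.conf sketch (the class name, whitelist pattern, and app name are placeholders), keeping in mind that the target must be configured as a deployment client and that this approach does not apply to clustered indexers or search head cluster members:

# serverclass.conf on the deployment server
[serverClass:indexers]
whitelist.0 = idx-*

[serverClass:indexers:app:my_props_app]
restartSplunkd = true
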
I need a list of only those jobName values which start with a letter a through m, any case. The below does not work:

index=log-13120-nonprod-c laas_appId=qbmp.prediction-engine sourcetype="qbmp.prediction-engine:app" "predicted as Prediction"
| table jobName
| dedup jobName
| where jobName like "[a-m]%"

A sample event looks like this:

08-06-2022 10:19:36.990 [task-53] INFO c.m.b.p.service.PredictionWorkerV2#run - predictionId=1e5a96c6-5f90-4bf9-b0df-7f3528ae642b, threadId=23, job=SRW-REPAPER-LoadedStatus^QNA predicted as Prediction{predictionId='1e5a96c6-5f90-4bf9-b0df-7f3528ae642b', jobName='SRW-REPAPER-LoadedStatus', instance='QNA', predictionStatus='cant_predict', predictedStartTime=-1, predictedFinishTime=-1, predictionExplanation='no_jobstats', predictedAt=1654697976}

The above event has jobName='SRW-REPAPER-LoadedStatus', which does not start with a letter from a through m, so it should not be displayed.
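The usual fix, sketched below: SPL's like() only understands the % and _ wildcards, not character classes, so the a-m test needs a regex, e.g. via match() with (?i) for case-insensitivity:

index=log-13120-nonprod-c laas_appId=qbmp.prediction-engine sourcetype="qbmp.prediction-engine:app" "predicted as Prediction"
| dedup jobName
| where match(jobName, "(?i)^[a-m]")
| table jobName
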
What is the best approach to creating a field that shows the number of incomplete requests in a given period of time? For the machine in question, events are logged when it completes the Request-Response Loop. I have a field time_taken which shows, in milliseconds, how long the Request-Response Loop has taken. I have already done the following; now how do I evaluate the total number of open_requests for each second?

| eval responded = _time
| eval requested = _time - time_taken
| eval responded = strftime(responded, "%Y/%m/%d %H:%M:%S")
| eval requested = strftime(requested, "%Y/%m/%d %H:%M:%S")
| eval open_requests = ???
| table _time open_requests
| sort - _time
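A sketch using the concurrency command, which computes exactly this kind of overlap count. It assumes time_taken is in milliseconds as stated; since each event carries the response time, _time is first shifted back to the request start, and events are re-sorted so concurrency sees them in descending time order:

... base search ...
| eval duration = time_taken / 1000
| eval _time = _time - duration
| sort 0 -_time
| concurrency duration=duration
| timechart span=1s max(concurrency) as open_requests
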
Hi SMEs, what should the maximum MemoryLimit value be for a *nix UF? I think the UF usually takes at most around 5% of memory for normal input configuration and processing, but the customer doesn't want things to get messed up in a worst-case scenario where the UF eats all the memory. Thanks
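If MemoryLimit here refers to the systemd directive (the UF itself exposes no hard memory-cap setting that I know of), a sketch of a unit override; the 512M figure is purely illustrative, not a Splunk recommendation, and the unit name assumes the forwarder was enabled with systemd-managed boot-start:

# systemctl edit SplunkForwarder  (creates an override.conf)
[Service]
# hard cap enforced by the kernel's cgroup controller
MemoryLimit=512M
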
Hi Team, I would like to retrieve the following info through a Splunk search: a list of all Splunk searches performed against a single index, along with the users who ran them and the timestamp of each search, for a given period (1 month or 1 year).
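A sketch against the audit index, which records every search along with the user who ran it (myindex is a placeholder, and _audit retention has to cover the period you want to report on):

index=_audit action=search info=completed search=*index=myindex*
| table _time user search
| sort - _time
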
Hi Team, is there any way to pull the last 1000 searches performed on a particular index, along with the user who performed each search?
1. The Splunk query I am looking for
2. The REST query I am looking for
I am running Splunk 8.0.
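Sketches for both, with the same _audit caveats as the previous question (myindex, and the credentials/host in the REST call, are placeholders). The SPL:

index=_audit action=search info=completed search=*index=myindex*
| head 1000
| table _time user search

And the same search run through the REST export endpoint:

curl -k -u admin:changeme https://localhost:8089/services/search/jobs/export \
  --data-urlencode search="search index=_audit action=search info=completed search=*index=myindex* | head 1000 | table _time user search" \
  -d output_mode=csv
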
Hi, I have a few queries regarding data ingestion from a .csv file. I am interested in knowing the following:
1. What is the most optimal way to bring the data from a .csv file into Splunk?
2. Are there any prerequisites to be satisfied before indexing a .csv file?
3. Are there any limitations in indexing data from a .csv file?
4. Are there any restrictions in indexing data from a .csv file (maximum file size allowed, maximum rows or columns that can be placed in a single .csv file, maximum number of files allowed, etc.)?
5. Is there any Splunk documentation available about this requirement? If so, please share the link for the same.
Thanks much!
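On point 1, a minimal props.conf sketch of the common approach for CSVs with a header row; the sourcetype name and timestamp column are placeholders, and INDEXED_EXTRACTIONS=csv parses the header columns into indexed fields at ingest:

# props.conf on the forwarder/instance monitoring the file
[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
# placeholder; omit if the file has no time column and _time will default to index time
TIMESTAMP_FIELDS = timestamp_column
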
Can anyone help me with resources to prepare for the Admin Certification exam? Or could someone share how they prepared without taking the Splunk-recommended courses? The recommended Data and System Administration courses are way too costly for me.