@Susha You can use a REST search to get the list of saved searches, which includes all your alerts and reports. Use the search below to get the disabled alerts and reports:
| rest /servicesNS/-/-/saved/searches splunk_server="local"
| search disabled=1
| table title author eai:acl.app updated
Similarly, you can use another REST search to get all the views:
| rest /servicesNS/-/-/data/ui/views splunk_server="local"
| table title author eai:acl.app updated
-- Hope this helps
@SplunkDash How are you onboarding this data? If you are onboarding it from a remote server using a UF, you should place the props.conf on that remote server to extract the fields. It's always better to test the extraction first in the UI: Settings -> Add Data -> Upload -> choose psv as the sourcetype. Once you are happy with the extraction, copy the parameters and deploy them to the UF. If the psv file does not have a header row, you also need to specify the field names in props.conf. -- Hope this helps.
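As a rough sketch of what that could look like for a headerless pipe-separated file (the sourcetype name and field names below are placeholders, not from the original question):

```ini
# props.conf -- hedged sketch; "psv_data" and the field names
# are placeholder assumptions for a headerless pipe-separated file
[psv_data]
INDEXED_EXTRACTIONS = psv
FIELD_DELIMITER = |
FIELD_NAMES = field1,field2,field3
```

Since INDEXED_EXTRACTIONS is applied at the forwarder, this stanza belongs on the UF.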
@Kamaal_Mohammed How did you configure the SSL cert on this host? Is it the default cert or a CA-signed cert (internal or external)? You can find this using the btool command:
/opt/splunk/bin/splunk btool inputs list http --debug | grep serverCert
You need to copy the root cert from the path shown above to the source from which you are making the POST request. -- Hope this helps
@jokovitch You can use the below eval command for this task:
| eval Country = if(Fname="fname1", "UK", Fname), Phone = case(Fname="fname1", "123")
-- Hope this helps
@pbarbuto Please use the search below. You should be able to see the "reached limit" messages for both user and role, assuming someone is hitting the limit:
index=_internal sourcetype=splunkd component=DispatchManager
-- Hope this helps.
@jcorcoran508 Just curious: does the newly cloned group have members? An AD group only shows up if it has members in it. If it does, please reload authentication (Settings -> Authentication Methods -> Reload Authentication Configuration) and check again. -- Hope this helps.
@AishwaryaDevi If I understand your problem correctly, you want to search login data in Splunk for the users in your spreadsheet. If not, please elaborate on your use case.
@nisha_sh Please follow the Splunk doc to complete the setup: https://docs.splunk.com/Documentation/AddOns/released/ImpervaWAF/Setup -- Hope this helps
@timsheets13 A couple of questions: 1. Have you created the "mySplunkWebCert.csr" mentioned in the command? 2. From which directory are you executing this openssl command? The certs are stored in /opt/splunk/etc/auth.
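For reference, a minimal sketch of generating the private key and CSR. The key/CSR file names mirror the ones in the question; the key size and subject CN are placeholder assumptions:

```shell
# work in a scratch directory; key/CSR names follow the question
cd /tmp
# generate a private key (2048-bit RSA as an example)
openssl genrsa -out mySplunkWebPrivateKey.key 2048
# create the CSR from that key; the subject CN is a placeholder
openssl req -new -key mySplunkWebPrivateKey.key \
  -out mySplunkWebCert.csr -subj "/CN=splunk.example.com"
```

Run `openssl req -in mySplunkWebCert.csr -noout -subject` afterwards to confirm the CSR was written correctly.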
@ashwath4k You can create a scripted input from the UI. I assume you have the UI enabled on this HF. Follow this path to create the scripted input: Settings -> Data Inputs -> Scripts -> New Local Script. Fill in the required parameters and save the input. -- Hope this helps
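If you prefer to define the input directly in configuration instead of the UI, a scripted input stanza could look like this (the script path, interval, index, and sourcetype below are all placeholder assumptions):

```ini
# inputs.conf on the HF -- hedged sketch; all values are placeholders
[script://$SPLUNK_HOME/etc/apps/my_app/bin/my_script.sh]
interval = 300
index = main
sourcetype = my_script_output
disabled = 0
```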
@rahul2gupta This is not related to disk space. You need to increase the concurrent searches limit at the role/user level. By default a user has a limit of 3 concurrent standard searches. -- Hope this helps.
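For example, the per-role limit can be raised in authorize.conf (the role name and quota value below are placeholder assumptions):

```ini
# authorize.conf -- hedged sketch; raise the concurrent search quota
# "role_user" and the value 6 are placeholder assumptions
[role_user]
srchJobsQuota = 6
```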
@dm1 If the CSV file generated from the search has duplicate rows, the indexer will index them as-is. You need to remove the duplicates in your search. Please try this out:
index=abc
| fields _time _raw
| fields - _indextime _sourcetype _subsecond
| dedup field_names*
| outputcsv abc_dns
-- Hope this helps
@redsox07928 Please refer to the Splunk Add-on for Microsoft Windows. It has many inputs. You can deploy this add-on as-is, or you can tweak the input parameters per your use case. https://splunkbase.splunk.com/app/742/
Sample perfmon stanza for CPU:
[perfmon://CPU]
counters = % Processor Time; % User Time; % Privileged Time; Interrupts/sec; % DPC Time; % Interrupt Time; DPCs Queued/sec; DPC Rate; % Idle Time; % C1 Time; % C2 Time; % C3 Time; C1 Transitions/sec; C2 Transitions/sec; C3 Transitions/sec
disabled = 0
instances = *
interval = 10
mode = multikv
object = Processor
useEnglishOnly = true
index = index_name
-- Hope this helps
@Sathya0Q I guess you have the list of users in a lookup file. If not, please create one with the field name user. Use the search below to get the login attempts from the selected users. If the field name in the lookup is not user, you should use a rename command in the subsearch:
index=_audit [ | inputlookup users.csv ] login attempt
-- Hope this helps
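For example, if the lookup column were named username instead of user (a hypothetical field name, for illustration only), the subsearch with the rename would look like:

```
index=_audit "login attempt"
    [ | inputlookup users.csv
      | rename username AS user
      | fields user ]
```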
@mah Is the table panel also built on this base search? It is possible to pass a value from the table panel to the base search; you should create tokens based on your requirements. In your case, you can use $click.value2$ to pick the value from the table. To manage tokens: Edit Dashboard -> More Actions (3 vertical dots at the top right of the panel) -> Edit Drilldown -> select 'Manage tokens on this dashboard' from the dropdown. -- Hope this helps
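In Simple XML, the drilldown on the table panel could set a token like this (the token name selected_value is a placeholder):

```xml
<drilldown>
  <!-- store the clicked cell's value in a token; the name is a placeholder -->
  <set token="selected_value">$click.value2$</set>
</drilldown>
```

The base search (or another panel) can then reference $selected_value$.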
@jason_hotchkiss A parenthesis is missing. The below should work:
| eval MemoryUtilization = round(((memTotalMB - memFreeMB) / memTotalMB * 100), 2)
-- Hope this helps
@SamHTexas Splunk has a dashboard, "Orphaned Scheduled Searches, Reports, and Alerts", to find orphaned knowledge objects. This dashboard is in the Search app. Users can remove/reassign the orphaned knowledge objects from: Settings -> All Configurations -> Reassign Knowledge Objects -> Orphaned. -- Hope this helps.
@klischatb The peers are added to the search head cluster by default when you integrate it with the indexer cluster (from the cluster master). If you no longer have this peer (server 2), you need to remove it from the indexer cluster and then from the cluster master. -- Hope this helps
@altink strftime converts UNIX time to regular, readable time. In your SPL, min_time and max_time are already converted in line 2 of the code. You can simply remove line 2, or you can add the following lines:
| eval min_time = strftime(strptime(min_time,"%m/%d/%Y %H:%M:%S"),"%Y-%m-%d")
| eval max_time = strftime(strptime(max_time,"%m/%d/%Y %H:%M:%S"),"%Y-%m-%d")
-- Hope this helps
@krvamsireddy From the logs it is clear that the cert has expired and you need to generate a new server cert. Check the validity of the cert using the command below; it should give you the end date of the cert:
openssl x509 -noout -enddate -in /opt/splunk/etc/auth/server.pem
To generate a new SSL cert:
/opt/splunk/bin/splunk createssl server-cert 3072 -d /opt/splunk/etc/auth -n server -c <FQDN>
Restart Splunk, then check the KV store status:
/opt/splunk/bin/splunk show kvstore-status
-- Hope this helps
@rsilwal7 You can use the Splunk Add-on for AWS to send data from AWS S3 to Splunk; you should use the SQS-based S3 approach. If the data volume is high, you can use this route instead: S3 -> Kinesis Firehose -> Splunk (using HEC). https://docs.splunk.com/Documentation/AddOns/released/AWS/SQS-basedS3 -- Hope this helps.