Our ITSI deployment is showing "Detected Anomaly" events for the KPI "Index Usage". Where and how can I find the notable events for those detected anomalies?
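(A minimal sketch of where to look, assuming a default ITSI setup: ITSI writes notable events to the itsi_tracked_alerts index, so a search along these lines should surface anomaly-driven notables. The string filter and field names are assumptions; adjust them to match the anomaly detection correlation search in your environment.)

index=itsi_tracked_alerts "Detected Anomaly"
| table _time, source, itsi_kpi_id, severity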
Hello, I am trying to connect an app to get data into Splunk using a REST API. The issue is that the REST API request needs to be implemented in two steps:

1. Send a POST request to get a token (valid for 24 hours).
2. Send a GET request to fetch the results, using the token from the first request.

I am able to implement the two requests separately, but I am looking to automate this process. My idea is to write a script that periodically copies the token value into the input configuration file. FYI: I have used the REST API Modular Input app for these requests.

Problems: The Splunk server is on Windows. Using the REST API Modular Input app, the token results are sent directly to the Splunk server, and I am not sure how to capture them on the Windows machine. If it were Linux, I would write a bash script that uses curl to fetch the token value and paste it into the input configuration; I am not sure how to perform this on Windows. Thanks
Hello All: I have problems with my application where I am configuring the following stanza in the inputs.conf file (C:\Program Files\SplunkUniversalForwarder\etc\apps\ope_web_api\default): https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/Specifyinputpathswithwildcards

[monitor://D:\package\...\version\Admin\APIS\log]
disabled = 0
index = ope_web_api
sourcetype = api_logs
ignoreOlderThan = 1d

I am on Splunk Cloud version 8.2.2. If I replace the ... with the actual directory name, it works fine:

[monitor://D:\package\ABC\version\Admin\APIS\log]
disabled = 0
index = ope_web_api
sourcetype = api_logs
ignoreOlderThan = 1d

But I need the ... because the directory name is not always the same. Does somebody know what I am missing here? Regards.
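(A sketch of an alternative worth trying, under the assumption that only the directory segment directly under package varies: in monitor paths, ... matches any number of path segments recursively, while * matches within a single segment, which can behave more predictably with mid-path wildcards on Windows.)

[monitor://D:\package\*\version\Admin\APIS\log]
disabled = 0
index = ope_web_api
sourcetype = api_logs
ignoreOlderThan = 1d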
Hi @LukeMurphey, I am hoping you can help with your File Meta-Data add-on, which I am hoping is just what I need. I have been trying to get it working for a while now, but it just didn't seem to do anything after I set up my data input. I did a bit more digging today in the 'file_meta_data_modular_input.log' log file and I am getting the following error reported:

2021-09-27 15:24:06,745 ERROR Execution failed
Traceback (most recent call last):
  File "D:\Program Files\Splunk\etc\apps\file_meta_data\bin\modular_input.zip\modular_input\modular_input_base_class.py", line 1095, in execute
    self.do_run(in_stream, log_exception_and_continue=True)
  File "D:\Program Files\Splunk\etc\apps\file_meta_data\bin\modular_input.zip\modular_input\modular_input_base_class.py", line 976, in do_run
    self.run(stanza, cleaned_params, input_config)
  File "D:\Program Files\Splunk\etc\apps\file_meta_data\bin\file_meta_data.py", line 641, in run
    file_filter=file_filter)
  File "D:\Program Files\Splunk\etc\apps\file_meta_data\bin\file_meta_data.py", line 187, in get_files_data
    file_path = file_path.encode("utf-8")
AttributeError: 'bytes' object has no attribute 'encode'

As the error is raised at the file_path.encode("utf-8") line in file_meta_data.py, I guess this may be something to do with my Splunk Enterprise (Free version) running on a Windows 10 machine, trying to index Windows volumes. FYI, in my data input it doesn't matter whether I set the "File or directory path" to "L:\" or "L:" or anything else (i.e. I want to index my L:\ volume); it just reports this error. Sorry, this is probably a noob question, but any help would be appreciated. FYI I am using v1.4.5 of your add-on. Many thanks in advance.
Hi, I need help with a regex for the LINE_BREAKER attribute in props.conf. I have the data below and want it ingested as a single event in Splunk. Currently, the <RESULTS> data splits into multiple events. I would like to send the entire <DETECTION> tag as a single event. Can someone help me with the right LINE_BREAKER pattern to use?

<DETECTION>
<ID>231</ID>
<TYPE>Information</TYPE>
<SEVERITY>1</SEVERITY>
<RESULTS>Line 1 : field 1 : value1 field 2: value2</RESULTS>
<STATUS>NEW</STATUS>
</DETECTION>
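(A sketch of a props.conf stanza for this, assuming each event in the feed begins with <DETECTION>; the sourcetype name is a placeholder. LINE_BREAKER's first capture group is consumed as the event boundary, and the lookahead keeps <DETECTION> at the start of the next event.)

[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=<DETECTION>)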
Hi, my company is deprecating basic authentication. We use the Microsoft Office 365 Reporting Add-on for Splunk, which relies on basic authentication. Are there any plans for a future version of this app? Or are there other apps that can be used (using other APIs to get the same data)? This is the app we use: https://splunkbase.splunk.com/app/3720/#/details I appreciate feedback and hints on other options, Chris
XML parsing is not working as expected: field values are being truncated. I tried changing the TRUNCATE value in props.conf, but that doesn't help. I tried with and without KV_MODE as well.

props.conf on the HF:

[iiq_db]
TRUNCATE = 100000
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = custom
pulldown_type = true
KV_MODE = xml

Sample log:

2021-09-24 14:28:29.011, id="XXXXXXX", created_dt="2021-09-24 08:18:01.87", created="1632489481870", source="RequestHandler", action="provision", target="XXXX", application="Enterprise Directory", account_name="XXXX,ou=XXXXX,ou=XXXX,ou=HO,o=XXX.com", attributes="<Attributes> <Map> <entry key="IIQDisabled"> <value> <Boolean></Boolean> </value> </entry> <entry key="accountFlag" value="ACTIVE"/> <entry key="cn" value="XXXXX"/> <entry key="dn" value="uid=XXXX,ou=XXXX,ou=XXXX,ou=HO,o=XXX.com"/> <entry key="email" value="XX@XX.com"/> <entry key="employeeNumber" value="XXXXX"/> <entry key="employeeType" value="E"/> <entry key="givenname" value="XXXXX"/> <entry key="globaluid" value="XXX"/> <entry key="mail" value="(XX0:XX1:XX3:XX0:XX0:XX0:XX0:XX0:XX0:XX0)XX@XX.com"/> <entry key="mailAccessDomain" value="HO XXX"/> <entry key="mailRoutingAddress" value="(XX0)EX"/> <entry key="mailalternateaddress"> <value> <List> <String>(XX0:XX11)S=XXXX/G=XXX/OU=XXXX.com</String> </List> </value> </entry> <entry key="XXUniqueId" value="XYZ"/> <entry key="XXaccountstatus" value="0"/> <entry key="XXlcsp1" value="XYZ"/> <entry key="XXlinteractivep1" value="XYZ"/> <entry key="XXllcp1" value="XXXXYZ"/> <entry key="XXlmaildisplayname" value="XY , AB"/> <entry key="XXlmemberof"> <value> <List> <String>cn=Passphrase-Policy-Users,ou=groups,o=XXl.com</String> </List> </value> </entry> <entry key="XXlprofilechecksum" value="XXXX"/> <entry key="XXorgmemberof"> <value> <List> <String>ou=XXX,ou=XXXX,ou=HO,o=XXl.com</String> </List> </value> </entry> <entry key="XXworkgroupmanager" value="XXXXXXX"/> <entry key="nsAccountLock" value="FALSE"/> <entry key="objectClass"> <value> <List> <String>top</String> <String>person</String> <String>organizationalperson</String> <String>inetorgperson</String> <String>mailrecipient</String> <String>universaluniqueid</String> <String>XXlorgperson</String> </List> </value> </entry> <entry key="op"> <value> <ObjectOperation>Modify</ObjectOperation> </value> </entry> <entry key="sn" value="XXXX"/> <entry key="uid" value="XXXX"/> <entry key="uidns" value="XXXX"/> <entry key="uuid" value="XXXXX"/> </Map> </Attributes> ", string1="Enterprise Directory", string2="committed"
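(One thing worth checking, offered as an assumption about the symptom: TRUNCATE in props.conf caps the indexed event length, but search-time automatic KV/XML extraction has its own limit. In limits.conf, the maxchars setting under the [kv] stanza, which defaults to 10240 characters, caps how far into an event the automatic extraction reads, so long embedded XML can look truncated at search time even when the raw event is intact. A sketch:)

[kv]
maxchars = 102400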
Hello! I have been trying to make a base search on a dashboard with a time input and an environment drop-down input. It only searches once and doesn't actually rerun when I change an input. Is there something I'm missing? These are my form inputs:

<form>
  <fieldset submitButton="false" autoRun="false">
    <input type="dropdown" token="env">
      <label>Environment</label>
      <choice value="TEST">TEST</choice>
      <choice value="DEV">DEV</choice>
      <choice value="PRD">PRD</choice>
    </input>
    <input type="time" token="time">
      <label>Time</label>
      <default>
        <earliest>@d</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>

This is my base search and two different pie charts:

  <search id="base_search">
    <query>index=Lorem logtype=ipsum enviroment=$env$ | stats count BY status</query>
    <earliest>$time.earliest$</earliest>
    <latest>$time.latest$</latest>
  </search>
  <row>
    <panel>
      <chart>
        <search base="base_search">
          <query>search statuscode&lt;400</query>
        </search>
        <option name="charting.chart">pie</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <chart>
        <search base="base_search">
          <query>search statuscode&gt;400</query>
        </search>
        <option name="charting.chart">pie</option>
      </chart>
    </panel>
  </row>
</form>
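(Two things worth checking, offered as assumptions. First, the inputs can be told explicitly to rerun dependent searches on change, as sketched below. Second, a post-process search only sees the fields the base search outputs; since the base search ends in stats count BY status, only status and count survive, so filtering on statuscode in the post-process searches will match nothing unless statuscode is part of the base search's output.)

<input type="dropdown" token="env" searchWhenChanged="true">
  <label>Environment</label>
  ...
</input>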
Hi all, I'm tasked with performing an audit of our Splunk (Cloud) search heads (2), as many apps and add-ons have been sporadically installed onto them over the years and problems are occurring. The aim is to export the results to CSV to compare, detect gaps and mismatches, and identify candidates for upgrade or removal. Any offers of help greatly appreciated.
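(A minimal sketch of a starting point: this REST endpoint lists the apps installed on the search head it runs against, and the resulting table can be exported to CSV from the search UI.)

| rest /services/apps/local splunk_server=local
| table title label version disabled
| sort label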
We are receiving the following error in the logs after a UF upgrade to 8.2.1. Can someone please confirm whether any action needs to be taken or if this error can be ignored?

ERROR DistributedTracer [389066 MainThread] - Couldn't find "distributed_tracer" in server.conf.

./splunk --version
Splunk Universal Forwarder 8.2.1 (build ddff1c41e5cf)

The UF has been started and is running correctly.
Hello, I want my search to list results by "A" only where the total count of "B" is higher than 3. A and B could have any values; it doesn't matter.

search ... | stats count(B) as countB by A, B | where countB > 3 | sort - A
We have a task to find all the hosts in our Splunk Enterprise deployment and, for each host, determine what types of logs we are getting from it. How can we do that easily?
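(A minimal sketch: tstats reads indexed metadata only, so it is cheap to run over a long time range; widen or narrow the index filter as needed.)

| tstats count where index=* by host, index, sourcetype
| stats values(sourcetype) as sourcetypes, values(index) as indexes by host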
Hi, we are trying to upgrade Splunk Enterprise from 7.3.1 to 8.1.5. What should the first activity be? Specifically:
1. Which major files do we need to back up?
2. Should we upgrade the less impacted items first?
3. In what order should the search head, monitoring console, indexers, and deployment server be updated?
4. Can we stop all indexers at the same time during the upgrade, and does that cause any impact?
5. How about upgrading the forwarders?
I would like to ask about the line of code we put in the message field of the Splunk alert action for Slack notifications:

$result.users$$result.message$

Here is a screenshot of the Send Message plugin details that we set in a test channel. (screenshot omitted) I would like to ask why, beginning last week, it suddenly began displaying this in Slack: (screenshot omitted) instead of the usual result, which would read: @yoshilog "Good day.. <Blah, blah>". What we did was update the code to add a whitespace between the two result tokens:

$result.users$ $result.message$

Doing so fixed the results and led to the expected output in our Slack test channel: @yoshilog "Good day.. <Blah, blah>". However, within the team there were questions about what had changed in the past week that suddenly caused the alert to stop posting the expected output in Slack, since no one had changed or touched the alert for a long time. I have also gotten in touch with the plugin developer, but he has not responded, so I resorted to posting here, since some Splunkers might have had experience with this issue. I would appreciate your ideas on what happened. Thank you in advance!
I have learned this is very important in making sure you can recover in case of a big disaster. It is a safety net for your saved searches, event types, tags, lookups, reports, and all your customizations. I work in a large environment including Splunk Enterprise and ES. Any planning advice or SPL is much appreciated. Thanks a ton!
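(A minimal sketch of one piece of such a plan: knowledge objects live under $SPLUNK_HOME/etc/apps/ and $SPLUNK_HOME/etc/users/, so backing up those directories covers most customizations; for a quick inventory from search, REST endpoints can list the objects, e.g. saved searches:)

| rest /services/saved/searches
| table title eai:acl.app eai:acl.owner search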
I know this is a niche, rookie question, but maybe someone out there can provide some guidance. I'm quite new to Splunk. I have practiced inputting data and working with it in Fundamentals 1, but I believe inputting other types of data and working with them would help me learn. I'm enjoying learning Splunk, but I lack a lot of experience in data analytics, and I don't know where to start looking for good practice data. I don't expect many people to have practice data readily available; even so, thank you for hearing me out.
I am getting an error from the cluster master under Messages: indexes missing. I need to learn how many are missing and what happened to them. If they were deleted, by whom? Thanks a million in advance.
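(A minimal sketch for taking stock: this REST endpoint lists the indexes each peer currently has configured, which can then be compared against what the cluster master expects.)

| rest /services/data/indexes
| stats values(splunk_server) as servers by title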
Hi, we are using Splunk version 8.1.0 in cluster mode. Our environment has these components:
- Nginx load balancer: for load-balancing requests to the search heads
- 3 search heads and 1 deployer: in cluster mode
- 3 indexers and 1 master node: in cluster mode
- 2 heavy forwarders: standalone, forwarding data with load balancing between the indexers
- 1 syslog server: receives syslog from 100 servers and sends it via ipvsadm (port 514/UDP) to the heavy forwarders
All Splunk servers run CentOS 7, all servers are in the same network zone, and we ingest almost 300 GB of data per day.
Server specifications:
- Search heads: 32 GB RAM, 32-core CPU
- Indexers: 32 GB RAM, 16-core CPU
- Heavy forwarders: 12 GB RAM, 12-core CPU
- Syslog server: 12 GB RAM, 12-core CPU
We have a problem with real-time search. We have a lot of dashboards with multiple searches in them, and when I open a dashboard, after a random interval (about 1 to 120 seconds) we get an error. Here is the description of the error:
[<indexer hostname>] Timed out waiting for peer <indexer hostname>:ingest_pipe=1. Search results might be incomplete! If this occurs frequently, receiveTimeout in distsearch.conf might need to be increased
We don't have any problem with resources such as CPU utilization, and there is no lack of memory either. This error happens even though another instance of ours, with one indexer and one search head in a non-clustered environment and the same traffic, has no such problem; that older instance runs Splunk 6.6.1.
What I have tried so far:
- Increased the receiveTimeout parameter on the search heads, though I suspect that is not the real problem
- Increased parallelIngestionPipelines on the indexers to 2
- Tuned the OS as recommended by the Splunk site
- Increased max_searches_per_cpu to 15
- and more
But the problem is not solved.
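(For reference, a sketch of the setting the error message points at; receiveTimeout is set in distsearch.conf on the search heads, and the stanza and value here are only illustrative:)

[distributedSearch]
receiveTimeout = 1200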
Hi, I have two queries:

index="bar_*" sourcetype=foo crm="ser" | dedup uid | stats count as TotalCount

and

index="bar_*" sourcetype=foo crm="ser" jet="fas" | dedup uid | stats count as TotalFalseCount

I need both of these queries merged, and then to take "TotalCount" and "TotalFalseCount" and compute ActualPercent = (TotalFalseCount/TotalCount)*100. I created one query as below:

index="bar_*" sourcetype=foo crm="ser"
| dedup uid
| stats count as TotalCount by zerocode SubType
| appendcols
    [search index="bar_*" sourcetype=foo crm="ser" jet="fas"
    | dedup uid
    | stats count as TotalFalseCount by zerocode SubType]
| eval Percent=(TotalFalseCount/TotalCount)*100
| stats count by zerocode SubType Percent

But the value of "Percent" is completely wrong. Can anybody help me understand how to get the proper value of "Percent" in this case?
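(A sketch of an alternative that avoids appendcols entirely: appendcols pastes result rows together by position, so if the two stats outputs have different zerocode/SubType rows, the counts get misaligned, which would explain the wrong Percent. This computes both counts in one pass; field names are taken from the question.)

index="bar_*" sourcetype=foo crm="ser"
| dedup uid
| stats count as TotalCount, count(eval(jet="fas")) as TotalFalseCount by zerocode SubType
| eval Percent=round(TotalFalseCount/TotalCount*100, 2)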
I have this SPL:

| tstats `summariesonly` earliest(_time) as _time from datamodel=Incident_Management.Notable_Events_Meta by source, Notable_Events_Meta.rule_id
| `drop_dm_object_name("Notable_Events_Meta")`
| `get_correlations`
| join rule_id
    [| from inputlookup:incident_review_lookup
    | eval _time=time
    | stats earliest(_time) as review_time by rule_id]
| eval ttt=review_time-_time
| stats count, avg(ttt) as avg_ttt, max(ttt) as max_ttt by rule_name
| sort - avg_ttt
| `uptime2string(avg_ttt, avg_ttt)`
| `uptime2string(max_ttt, max_ttt)`
| rename *_ttt* as *(time_to_triage)*
| fields - *_dec

It should display the mean time to triage over 14 days, but it doesn't work for a 14-day range; it only works for 30 days. Any advice?
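(One thing worth trying, as a sketch: pin the time range explicitly inside the tstats call rather than relying only on the time picker, since tstats accepts earliest/latest in its where clause. The 14-day value is just an example, and this assumes the accelerated data model actually retains summaries that far back, which is also worth verifying in the data model's acceleration settings.)

| tstats `summariesonly` earliest(_time) as _time from datamodel=Incident_Management.Notable_Events_Meta where earliest=-14d@d latest=now by source, Notable_Events_Meta.rule_id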