All Topics


Hello Experts, I am looking at an alert that uses a join to match a work_center with a work order, and I am wondering which records in the stream of records the join looks at to get that result. Is there a way to get the latest result? To explain further, the work center will in some cases change based on where work is being completed, so I would like to grab the latest result when the alert runs. The current code gives us a way to compare the work center in source="punch" against the current stream of data; I am wondering if I can further manipulate that subsearch to look at the last result in source="punch". I tried a couple of things but didn't have any luck, and I'm not super familiar with joins in my normal work.

| join type=left cwo [search source=punch | rename work_center as position]
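If the goal is just the most recent work_center per work order from source=punch, one sketch is to collapse the subsearch with stats latest() before joining, so each cwo contributes only its newest value (this assumes the punch events carry a cwo field to join on):

| join type=left cwo
    [ search source=punch
      | stats latest(work_center) as position by cwo ]

latest() picks the value from the most recent event by _time, which should give the current work center at the moment the alert runs.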
Hi All, I have many indexes and sourcetypes, but I don't know which ones I have to use to search for traffic involving a specific IP address and port. Please guide me on how I can identify and use the existing indexes and sourcetypes to analyze that particular traffic.
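One way to find out where a given IP shows up, without knowing the index or sourcetype in advance, is a short broad search that just counts by index and sourcetype. A sketch (the IP is a placeholder; keep the time range small so index=* stays affordable):

index=* "10.20.30.40" earliest=-60m
| stats count by index sourcetype

| tstats count where index=* by index sourcetype

The first search shows which index/sourcetype pairs contain the address at all; the second simply lists what data exists over the selected time range, which helps narrow things down before adding the port filter (e.g. dest_port=443).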
We have a standalone environment and are getting the error "the percentage of non-high priority searches skipped (61%) over the last 24 hours is very high and exceeded the red threshold (20%) on this splunk instance."

The environment: the customer has a standalone instance where we created an app with a saved-search script that pulls all indexed events every hour and bundles them into a .json file; the customer then compresses it into a .gz file for transfer into our production environment.

What we are seeing is this skipped-searches message, and when we check the specific job, every time it runs two things show up as jobs: the export app started by Python calling the script, and then the actual search job with our SPL. The two jobs are 1 second apart and each stays on the Jobs page for 10 minutes; the customer states that the job takes ~2.5 minutes to complete. The Python script seems to stay around longer for some reason, even after its job finishes.

Not sure how to proceed. We had it scheduled every 4 hours and it was doing the same thing, so we lowered it to 1 hour, with no difference. Our search uses the epoch time of the last completed .json file and the current epoch time to grab the events in that range, so I'm not sure whether the message is a false positive caused by the way we are catching events (timestamps). How can I remove the skipped-searches error message? Tips?
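One way to see which scheduled searches are actually being skipped, and the reason the scheduler reports, is a search against the internal scheduler log (a sketch; these fields exist on a standard installation):

index=_internal sourcetype=scheduler status=skipped
| stats count by savedsearch_name, reason
| sort - count

The reason field usually says whether the skips come from concurrency limits or from the previous run of the same search still being in flight, which narrows down whether the export job itself is the culprit.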
Hello, I have a search as shown below which gives me the start time (start_run), end time (end_run) and duration when the value of ValueE is greater than 20 for the Instrument my_inst_226. I need to get the ValueE values from 11 other Instruments for the duration in which my_inst_226 has ValueE greater than 20, and I would like to use start_run and end_run to find those values. I'm thinking that start_run and end_run could be variables I can use when searching ValueE for my 11 other Instruments, but I am stuck on how to use start_run and end_run in the next stage of my search.

index=my_index_plant sourcetype=my_sourcetype_plant Instrument="my_inst_226"
| sort 0 Instrument _time
| streamstats global=false window=1 current=false last(ValueE) as previous by Instrument
| eval current_over=if(ValueE > 20, 1, 0)
| eval previous_over=if(previous > 20, 1, 0)
| eval start=if(current_over=1 and previous_over=0,1,0)
| eval end=if(current_over=0 and previous_over=1,1,0)
| where start=1 OR end=1
| eval start_run=if(start=1, _time, null())
| eval end_run=if(end=1, _time, null())
| filldown start_run end_run
| eval run_duration=end_run-start_run
| eval check=_time
| where end=1
| streamstats count as run_id
| eval earliest=strftime(start_run, "%F %T")
| eval latest=strftime(end_run, "%F %T")
| eval run_duration=tostring(run_duration, "duration")
| table run_id earliest latest start_run end_run run_duration current_over previous_over end Instrument ValueE

Any and all tips, help and advice will be gratefully received.
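One possible way to reuse start_run and end_run as time boundaries is the map command, which runs a secondary search once per result row and substitutes field values as $tokens$. A sketch, assuming the search above is reduced to one row per run (map is subject to a maxsearches limit, and the inner stats is only a guess at what you want from the other instruments):

... | table start_run end_run
| map maxsearches=50 search="search index=my_index_plant sourcetype=my_sourcetype_plant Instrument!=my_inst_226 earliest=$start_run$ latest=$end_run$ | stats avg(ValueE) as avg_ValueE max(ValueE) as max_ValueE by Instrument"

If there is only a single run of interest, a plain subsearch that returns earliest and latest values can do the same job without map.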
I am getting a 500 internal server error when I try to connect to the HF GUI. I ran firewall-cmd --list-ports, and it shows 8000/tcp. I also checked web.conf, and it shows enableSplunkWebSSL = 1, as well as httport = 8000. What else can I check? I appreciate the help in advance!
Hi Team, I am continuously getting the two errors below after a restart. These errors appear on the indexer cluster:

ERROR SearchProcessRunner [531293 PreforkedSearchesManager-0] - preforked process=0/33361 hung up
WARN HttpListener [530927 HTTPDispatch] - Socket error from <search head IP address>:50094 while accessing /services/streams/search: Broken pipe

Please help to resolve these errors.
See how you can detect memory leaks with Automatic Leak Detection, and how you may be able to resolve the issue before a restart is required.

CONTENTS | Introduction | Video | Resources | About the presenter

Video Length: 2 min 24 seconds

When Automatic Leak Detection is enabled, Ops teams can identify and diagnose memory leaks within the JVM and coordinate with their Dev teams to resolve the code-based causes they uncover.

Additional Resources
Learn more about Automatic Leak Detection for Java in the documentation.

About the presenter: Tori Beaird (Forbess)
Tori Beaird (Forbess) joined AppDynamics as a Sales Engineer in 2020. With an Industrial Distribution Engineering degree and a decade of musical theatre training, sales engineering offered the best of both worlds. Although she is a Texas native, she helps customers up and down the West Coast improve their application monitoring practices.

Within AppDynamics, Tori is a part of the AppDynamics Cloud Native Application Observability Champions team, enabling peers and customers alike on AppDynamics' latest monitoring tool. With a passion for teaching others, Tori continues to develop and present internal training sessions to the broader Cisco organization.

When Tori isn't at work, you will find her flying Cessna 182s, volunteering with her church, and spending time with her beloved husband and friends!
Hello, I'm trying to use the global account variables:
username > ${global_account.username} as tenant ID
password > ${global_account.password} as token ID
to build the REST URL dynamically, but it seems the content of the global variables is not being filled in:

2023-09-13 14:51:12,726 - test_REST_API - [ERROR] - [test] HTTPError reason=HTTP Error Invalid URL '{{global_account.username}}/api/v2/entities?entitySelector=type("{{text1}}"),toRelationships.isClusterOfCai(type(KUBERNETES_CLUSTER),entityId("KUBERNETES_CLUSTER-846D9F2054A407A0"))&pageSize=4000&from=-30m&fields=toRelationships.isNamespaceOfCai,fromRelationships.isInstanceOf': No scheme supplied. Perhaps you meant http://{{global_account.username}}/api/v2/entities?entitySelector=type("{{text1}}"),toRelationships.isClusterOfCai(type(KUBERNETES_CLUSTER),entityId("KUBERNETES_CLUSTER-846D9F2054A407A0"))&pageSize=4000&from=-30m&fields=toRelationships.isNamespaceOfCai,fromRelationships.isInstanceOf? when sending request to url={{global_account.username}}/api/v2/entities?entitySelector=type("{{text1}}"),toRelationships.isClusterOfCai(type(KUBERNETES_CLUSTER),entityId("KUBERNETES_CLUSTER-846D9F2054A407A0"))&pageSize=4000&from=-30m&fields=toRelationships.isNamespaceOfCai,fromRelationships.isInstanceOf method=GET Traceback (most recent call last):

The ${global_account.username} value has been tested with and without the https:// prefix. Can anyone help me, please?
Hello, I am trying to build a report that alerts us when a support ticket is about to hit 24 hours. The field we are using is a custom time field called REPORTED_DATE, and it displays the time like this: 2023-09-11 08:44:03.0. I need a report that tells us when tickets are within 12 hours or less of crossing the 24-hour mark.

This is our code so far:

((index="wss_desktop_os") (sourcetype="support_remedy")) earliest=-1d@d
| search ASSIGNED_GROUP="DESKTOP_SUPPORT" AND STATUS_TXT IN ("ASSIGNED", "IN PROGRESS", "PENDING")
| eval TEST = REPORTED_DATE
| eval REPORTED_DATE2=strptime(TEST, "%Y-%m-%d")
| eval MTTRSET = round((now() - REPORTED_DATE2) /3600)
```| eval MTTR = strptime(MTTRSET, "%Hh, %M")```
| dedup ENTRY_ID
| stats LAST(REPORTED_DATE) AS Reported, values(ASSIGNEE) AS Assignee, values(STATUS_TXT) as Status, values(MTTRSET) as MTTR by ENTRY_ID

Any help would be appreciated. I will admit I struggle with time calculations.
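A sketch of one way to do the time math; it assumes REPORTED_DATE always looks like 2023-09-11 08:44:03.0 (the %Q is meant to absorb the trailing fraction; if it doesn't parse in your data, strip the .0 first with replace()):

| eval reported_epoch=strptime(REPORTED_DATE, "%Y-%m-%d %H:%M:%S.%Q")
| eval age_hours=round((now() - reported_epoch)/3600, 1)
| eval hours_to_breach=24 - age_hours
| where hours_to_breach>0 AND hours_to_breach<=12
| table ENTRY_ID REPORTED_DATE age_hours hours_to_breach

The main point is that strptime needs the full date-and-time format; parsing with "%Y-%m-%d" alone drops the time of day, so the age calculation is measured from midnight.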
Our Splunk environment is chronically under-resourced, so we see a lot of this message:

[umechujf,umechujs] Configuration initialization for D:\Splunk\etc took longer than expected (10797ms) when dispatching a search with search ID _MTI4NDg3MjQ0MDExNzAwNUBtaWw_MTI4NDg3MjQ0MDExNzAwNUBtaWw__t2monitor__ErrorCount_1694617402.9293. This usually indicates problems with underlying storage performance.

It is our understanding that the core issue here is not so much storage as processor availability: basically, Splunk had to wait about 10.8 seconds for the specified pool of processors to be available before it could run the search. We are running a single SH and a single IDX, both configured with 10 CPU cores. This is also a VM environment, so those are shared resources. I know, basically all of the things Splunk advises against (did I mention we're also running Windows?). No, we can't address the overall resource situation right now.

Somewhere the idea came up that reducing the number of cores might improve processor availability: if Splunk were only waiting for 4 or 8 cores, it would at least begin the search with less initial delay, since it would be waiting on a smaller pool of cores. So our question is: which server is most responsible for the delay, the SH or the IDX? Which would be the better candidate for reducing the number of available cores?
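To see which instance is actually logging the slow-initialization message, and how long each wait is, a sketch against the internal logs may help; the rex pattern assumes the message text matches the example above, and it assumes the warning is written to splunkd.log on the instance that dispatches the search:

index=_internal sourcetype=splunkd "Configuration initialization" "took longer than expected"
| rex "took longer than expected \((?<init_ms>\d+)ms\)"
| stats count avg(init_ms) as avg_ms max(init_ms) as max_ms by host

If the counts and durations cluster on one host, that host is the more likely candidate to tune first.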
Hi All, I didn't get any results using the search below. How can I check and confirm the index and sourcetype specifically, so I can make the query more precise?

index=* | search src=**.**.***.** OR **.**.***.** dest_ip=**.***.***.*** dest_port=443

How can I confirm the sourcetype and index?
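As a side note, the OR probably needs grouping; otherwise the second address is treated as a bare keyword rather than a src value. A sketch of a corrected filter, plus a quick way to list the indexes visible to your role (the IP addresses below are placeholders):

index=* (src=1.1.1.1 OR src=2.2.2.2) dest_ip=3.3.3.3 dest_port=443
| stats count by index sourcetype

| eventcount summarize=false index=*
| dedup index
| table index

The first search, run over a short time range, shows which index/sourcetype combinations actually contain matching events; the second simply lists the indexes you are allowed to search.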
I am trying to restrict access to a KV store lookup in Splunk. When I set the read/write permissions only for users assigned to the test_role role, it should not be accessible by any user outside that role, but it isn't working as expected with a KV store lookup. Can anybody suggest how to achieve this?
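A sketch of what is usually needed: for a KV store lookup, the ACL has to be set on both the collection (collections.conf stanza) and the lookup definition (transforms.conf stanza); if either one stays readable by everyone, other roles can still reach the data. In the app's metadata/local.meta this could look roughly like the following (stanza names are hypothetical and must match your own collections.conf / transforms.conf):

[collections/my_collection]
access = read : [ test_role ], write : [ test_role ]
export = none

[transforms/my_kv_lookup]
access = read : [ test_role ], write : [ test_role ]
export = none

Note that roles holding admin_all_objects (e.g. admin, sc_admin) bypass object-level permissions, so it is worth testing with an ordinary user account.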
Hi, I want to create a Splunk table using multiple fields. Let me explain the scenario. I have the following fields:

Name
Role (multiple roles will exist for each name)
HTTPrequest (there are multiple responses: 2**, 3**, 4** and 5**)

My final output should be that, when the query runs, it groups the data per day in the format below:

Date | Name | Role | Success | Failed | Total | Failed %
01-Jan-23 | Rambo | Team lead | 100 | 0 | 100 | 0
01-Jan-23 | Rambo | Manager | 100 | 10 | 110 | 10
01-Jan-23 | King | operator | 2000 | 100 | 2100 | 5
02-Jan-23 | King | Manager | 100 | 0 | 100 | 0
03-Jan-23 | cheesy | Manager | 100 | 10 | 110 | 10
04-Jan-23 | cheesy | Team lead | 4000 | 600 | 4600 | 15

So, what I tried is:

index=ABCD
| bucket _time span=1d
| eval status=case(HTTPrequest < 400,"Success",HTTPrequest > 399,"Failed")
| stats count by _time Name Role status

This gives me something like the table below, but I need Success and Failed in two separate columns as shown above, and I also need to add the Failed % and Total:

Date | Name | Role | HTTPStatus | COUNT
01-Jan-23 | Rambo | Team lead | Success | 100
01-Jan-23 | Rambo | Team lead | Failed | 0
01-Jan-23 | Rambo | Manager | Success | 100
01-Jan-23 | Rambo | Manager | Failed | 10
01-Jan-23 | King | operator | Success | 2000
01-Jan-23 | King | operator | Failed | 200
02-Jan-23 | King | Manager | Success | 10
03-Jan-23 | cheesy | Manager | Success | 300
04-Jan-23 | cheesy | Team lead | Success | 400

I used chart count over X by Y, but that only allows me to use two fields, not more. Please could you suggest how to get this sorted?
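One sketch that produces the first table directly, using conditional sums inside stats so Success and Failed become separate columns (field names as in your search):

index=ABCD
| bucket _time span=1d
| stats sum(eval(if(HTTPrequest<400,1,0))) as Success sum(eval(if(HTTPrequest>399,1,0))) as Failed by _time Name Role
| eval Total=Success+Failed
| eval "Failed %"=round(Failed*100/Total, 2)
| eval Date=strftime(_time, "%d-%b-%y")
| table Date Name Role Success Failed Total "Failed %"

stats can group by any number of fields, which gets around the two-field limit of chart ... over ... by ....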
Hello, I have a couple of Splunk columns that look as follows:

server:incident:incident#:severity
severity

This object is then fed to another system which separates it and generates incidents:

server: hostname
incident: category of incident
incident#: the incident number
severity: Critical/Warning/Clear

Example:
serverA:zabbix:123456:Warning   Warning
serverA:zabbix:123456:Critical   Critical

The objective is that it generates uniqueness of the incident (if Warning, then create a ticket; if Critical, then call out). All works well with the separate Critical and Warning alerts; however, when a Clear is generated, I need to generate two records that look as follows:

serverA:zabbix:123456:Warning   Clear
serverA:zabbix:123456:Critical   Clear

This way, the object that has been sent will get the clear. Is there a way to achieve this?
Thanks
David
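One possible approach on the Splunk side is to fan a Clear event out into one row per original severity by building a multivalue field and expanding it. A sketch with hypothetical field names (server, category, incident_num holding the pieces of the key, severity holding Warning/Critical/Clear):

| eval key_sev=if(severity="Clear", "Warning,Critical", severity)
| makemv delim="," key_sev
| mvexpand key_sev
| eval alert_key=server.":".category.":".incident_num.":".key_sev
| table alert_key severity

For Warning/Critical events nothing changes (the multivalue holds a single value), while a Clear event becomes two rows whose keys end in Warning and Critical but whose severity column stays Clear, matching the two open records downstream.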
The prerequisites indicate that the Splunk DB Connect extension will not work with systems that are FIPS compliant.  Will this change in future releases and is there a timeframe for this release? 
Hi Everyone, I want to plot a chart according to the calendar week. I plotted a timechart like this:

| timechart span=7d distinct_count(Task_num) as Tasks by STATUS

But this doesn't give the exact calendar weeks. I am also keeping this chart's data to the last 3 months. Does anyone have an idea how to plot a bar chart based on calendar week? Instead of the date, I want to see the data for the calendar weeks of the last 3 months. I found out from the Splunk community how to get the calendar week, but I am not able to plot a graph out of it:

| eval weeknum=strftime(strptime(yourdatefield,"%d-%m-%Y"),"%V")
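One possible approach is to derive the week label from _time directly and use chart instead of timechart, so the buckets follow ISO calendar weeks rather than rolling 7-day spans from the search window. A sketch:

| eval week=strftime(_time, "%Y-W%V")
| chart dc(Task_num) as Tasks over week by STATUS
| sort week

%V is the ISO week number; prefixing the year keeps the labels sortable over a three-month window. If the data spans a year boundary, the first or last ISO week can pair with the "wrong" year label, which may need special handling.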
Hello, I'm trying to create a timechart which compares two date/time ranges. I want to see the values from last Sunday (10.9) between 15:00-16:30 and compare them with the values for the same time on the Sunday of the week before (3.9). How can I do it? Thanks
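One sketch, using two explicit time windows and shifting the older one forward by exactly 7 days (604800 seconds) so both series land on the same x-axis; the base search and the value being charted are placeholders:

index=your_index earliest="09/10/2023:15:00:00" latest="09/10/2023:16:30:00"
| timechart span=5m count as this_sunday
| append
    [ search index=your_index earliest="09/03/2023:15:00:00" latest="09/03/2023:16:30:00"
      | timechart span=5m count as last_sunday
      | eval _time=_time+604800 ]
| timechart span=5m sum(this_sunday) as this_sunday sum(last_sunday) as last_sunday

The timewrap command is an alternative if you chart a longer range and wrap it by one week.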
Hi Team, I am looking for help creating a search query for my daily report, which runs 3 times a day. We are putting files into a directory which we are monitoring in Splunk. Is there any way we can grab events from only the latest source file? For example:

Index=abc sourcetype=xyz
source=/opt/app/file1_09092023.csv
source=/opt/app/file2_09102023.csv
source=/opt/app/file3_09112023.csv ...

New files can be placed from time to time. I want the report to show only events from the latest file; is that possible? Thank you
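One possible way is to let a subsearch work out the most recently indexed source and hand it back to the outer search as a filter. A sketch (tstats keeps the subsearch cheap):

index=abc sourcetype=xyz
    [ | tstats max(_time) as latest_time where index=abc sourcetype=xyz by source
      | sort - latest_time
      | head 1
      | fields source ]

The subsearch returns a single source value, so the outer search only reads events from the newest file. If the date embedded in the file name is the authority on "latest", you could instead extract it from source with rex and filter on that.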
Hi Splunkers, I have to forward data inside CSV files from an on-prem HF to Splunk Cloud and I'm facing some issues, because the data does not seem to be forwarded. Let me share some additional bits.

Info about data
- Source data are on a cloud instance (Forcepoint) provided by the vendor
- A script has been provided by the vendor to pull data from the cloud
- The script is installed and configured on our Splunk HF
- Data are saved locally on the HF
- Data are in .csv files

Info about HF configuration
- We created a new data input under Settings -> Data inputs -> Local inputs -> Files & Directories
- We set as data input the path where the .csv files are saved after script execution
- We set the proper sourcetype and index
- Of course, we configured the HF to send data to Splunk Cloud. We downloaded the file from the cloud, from the "Universal Forwarder" app, and installed it as an app on the HF: the outputs.conf is properly configured, and other data are sent to Splunk Cloud without problems (for example, Network inputs go to Cloud without issues; same for Windows ones)

Info about sourcetype and index and their deployment
- We created a custom add-on that simply provides the sourcetype "forcepoint"
- The sourcetype is configured to extract data from CSV; that means we set the parameter Indexed_extractions=csv
- We installed the add-on on both the HF and Splunk Cloud
- The index, called simply "web", has been created on both the HF and Splunk Cloud

By the way, it seems that data are not sent from the HF to Cloud. So, did I forget some steps? Or did I get some of the above wrong?
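For comparison, a minimal sketch of the HF-side pieces for this kind of CSV pickup, with placeholder paths and names to check your own configuration against:

inputs.conf (in your add-on or app on the HF)
[monitor:///opt/forcepoint/output/*.csv]
sourcetype = forcepoint
index = web
disabled = false

props.conf (same app on the HF, since the HF does the parsing)
[forcepoint]
INDEXED_EXTRACTIONS = csv

Two things worth checking: the documented spelling of the setting is INDEXED_EXTRACTIONS = csv (attribute names in .conf files are generally case-sensitive, so it is worth matching the documented casing), and splunkd.log on the HF (look for TailReader/TailingProcessor messages mentioning the monitored path) usually says whether the files are being read at all, which separates an input problem from a forwarding problem.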
Hi, I've spent too many hours on what should be a simple question... it is supposed to be a basic thing. I want to present both percentages and regular values in a bar chart (it can be in the tooltip, like in a pie chart). If that's not possible, I'd like to present only percentages but add the "%" symbol (when I tried to add %, it converted the fields to strings and nothing was shown in the chart). I can't add a JS script; I have no access to the server. This is my query:

| stats sum(CountEvents) by CT
| rename "sum(CountEvents)" as "countE"
| eventstats sum(countE) as Total
| eval perc=round(countE*100/Total,2)
| chart sum(perc) as "EventsPercentages[%]" over CT

Thanks a lot
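If showing two numeric columns per CT is acceptable, one sketch is to chart the raw count and the numeric percentage side by side and keep the % sign out of the values themselves, since appending "%" turns the field into a string and the chart drops it:

| stats sum(CountEvents) as countE by CT
| eventstats sum(countE) as Total
| eval "EventsPercentages[%]"=round(countE*100/Total, 2)
| fields CT countE "EventsPercentages[%]"

The column name can carry the % sign even though the values stay numeric. Because counts and percentages have very different scales, a chart overlay or second axis for one of the two series usually makes this readable.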