All Topics

Sample events:

2022-02-23 11:02:55 Cust=A1,txn_num=12,txn_status=success
2022-02-23 11:02:55 Cust=A1,txn_num=7,txn_status=failed
2022-02-23 11:02:55 Cust=A1,txn_num=18,txn_status=awaiting
2022-02-23 11:04:55 Cust=A2,txn_num=13,txn_status=success
2022-02-23 11:04:55 Cust=A2,txn_num=18,txn_status=failed
2022-02-23 11:04:55 Cust=A2,txn_num=26,txn_status=awaiting
2022-02-23 11:06:55 Cust=A3,txn_num=12,txn_status=success
2022-02-23 11:06:55 Cust=A3,txn_num=7,txn_status=failed
2022-02-23 11:06:55 Cust=A3,txn_num=18,txn_status=awaiting
2022-02-23 11:15:55 Cust=A4,txn_num=13,txn_status=success
2022-02-23 11:15:55 Cust=A4,txn_num=18,txn_status=failed
2022-02-23 11:15:55 Cust=A4,txn_num=26,txn_status=awaiting
2022-02-23 11:25:55 Cust=A5,txn_num=12,txn_status=success
2022-02-23 11:25:55 Cust=A5,txn_num=7,txn_status=failed
2022-02-23 11:25:55 Cust=A5,txn_num=18,txn_status=awaiting
2022-02-23 11:30:55 Cust=A6,txn_num=13,txn_status=success
2022-02-23 11:30:55 Cust=A6,txn_num=18,txn_status=failed
2022-02-23 11:30:55 Cust=A6,txn_num=18,txn_status=awaiting

What I'm trying to achieve is a bar chart for each customer showing the transaction status for the timeframe. I would then divide the customers with trellis, but for each customer I can't get the bar height based on txn_num while splitting by status. Something like the mock-up below.
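A sketch of one possible approach, assuming the index/sourcetype names below are placeholders and that Cust, txn_num, and txn_status are extracted automatically from the key=value pairs:

```
index=main sourcetype=txn_logs
| stats sum(txn_num) AS total_txn BY Cust txn_status
| xyseries Cust txn_status total_txn
```

Rendered as a Bar chart with the trellis layout split by Cust, this should give one panel per customer with one bar per status, sized by the summed txn_num.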
Hi all, I'm using the syndication component (latest version) to fetch data from multiple feeds:

https://www.cloudflarestatus.com/history.atom
https://cloud.ibm.com/status/api/notifications/feed.rss
https://status.aws.amazon.com/rss/all.rss
https://status.cloud.google.com/feed.atom
https://ocistatus.oraclecloud.com/history.rss

After adding these entries, the events started to repeat every time each feed is processed, which is every 5 minutes; that is, the entire set of events is re-indexed every 5 minutes for each feed, even though the option to only take new events into account is enabled. For example, with a single feed configured (the Google feed, with 3 events), after 5 minutes this search:

index=gcc_extension_1 source=syndication://google_gcc_ext | stats count values(host) values(source) values(sourcetype) values(index) by _raw | where count>0

returns 6 results. Note that it is not the entire _raw that is repeated, since _indextime is different each time the feed is processed. I've been researching and doing all kinds of tests for a long time, but I don't know what the problem could be. Aside from screenshots, I can provide the feed configuration as needed. If anyone could help me out a bit with this, I'd really appreciate it. Thank you very much in advance.
Hi everyone, I created a dashboard with the choropleth SVG feature. However, for my use case I cannot use the upload function for SVGs; I have to use the link function. I tried to use a link to Google Drive to insert the SVG data, but the result is an error message: "error was `TypeError: Failed to fetch`". I often use links referring to PNG data in Drive and it always works. I realized that a link to a PNG opens the PNG, while a link referring to an SVG in Drive opens a new tab and downloads the data instead of displaying it. Is there any way to make this function work, or does Splunk not yet support the link function for SVGs?
I am registering an app in Azure AD to use the Microsoft 365 App for Splunk. When I registered the app, I added the Office 365 Management API to the API permissions. However, the permissions that are actually displayed differ from the ones shown in the steps described in the reference URL. Is it possible for Splunk to get logs from Microsoft 365 without "ActivityReports" and "ThreatIntelligence" in the permissions?

Reference URL: https://www.splunk.com/ja_jp/blog/it/set-up-guide-microsoft365-vol1.html
Hi, I'm new to Splunk and I'm trying to compare values in the same field and then group them. The events differ in client_transaction_id, pp_account_number, and corr_id, so I had to remove those before comparing and grouping. I used | stats ... by, but it didn't get me the results: events that looked the same were not grouped together. I also went on to remove spaces so that it would group better, but that didn't work either. Below is my query.

(index=pp_cal_live_logs_failure_services OR index=pp_cal_live_logs_success_sampling OR index=pp_cal_live_logs_allowlist) (machineColo="*") source IN ("riskexternalgateway")
| eval corrId=corr_id
| fields "corrId", "calName", "calMessage"
| where (match(calName,"Monitor_Vendor_Service_Call") AND match(calMessage,"usecase_name=US_CIPACHFunding&VReq[a-z]*"))
| eval calMessage=replace(calMessage, " ", "")
| eval calMessage=replace(calMessage, "<client_transaction_id>.*</client_transaction_id>", " ")
| eval calMessage=replace(calMessage, "<pp_account_number>.*</pp_account_number>", " ")
| eval calMessage=replace(calMessage, "corr_id_=.*", "")
| stats by calMessage
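One likely issue, independent of the replace() logic: stats requires at least one aggregation function, so a bare | stats by calMessage is not a valid grouping. A minimal sketch of how the query could end instead:

```
| stats count BY calMessage
| sort - count
```

If the IDs that were stripped out should still be visible per group, something like | stats count values(corrId) AS corrIds BY calMessage would keep them as a multivalue field.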
I need to extract fields from this JSON:

{
  "AAA": {
    "modified_files": [
      "\"b/C:\\\\/HEAD\"",
      "\"b/C:\\\\/dev\"",
      "\"b/C:\\\\HEAD\""
    ]
  },
  "BBB": {
    "modified_files": [
      "\"b/C:\\\\/HEAD\"",
      "\"b/C:\\\\/dev\"",
      "\"b/C:\\\\HEAD\""
    ]
  }
}

Expected output: AAA and BBB are the application names, e.g. Application: AAA. Thanks in advance.
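The application names here are simply the top-level object keys. The idea, sketched in Python with a simplified sample that mirrors the event structure (paths shortened for clarity); in Splunk itself, the spath command can walk the same structure:

```python
import json

# Simplified sample mirroring the event's shape
raw = json.dumps({
    "AAA": {"modified_files": ["b/C:/HEAD", "b/C:/dev"]},
    "BBB": {"modified_files": ["b/C:/HEAD"]},
})

data = json.loads(raw)
# The top-level keys are the application names
applications = sorted(data.keys())
for app in applications:
    print(f"Application: {app}")
```
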
Hi All, how do we know whether the typing queues are blocked or not? Is it from the internal logs? From the backend of the server, is it possible to find the queue blocks?
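A hedged sketch of one way to check, using the queue metrics splunkd writes to its own _internal index (field names as they appear in metrics.log group=queue events):

```
index=_internal source=*metrics.log* group=queue name=typingqueue
| eval pct_full = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m max(pct_full) BY host
```

Sustained values near 100% indicate the typing queue is saturated; events carrying blocked=true (index=_internal group=queue blocked=true) show the queue was actually full when sampled, which usually points at a bottleneck further down the pipeline.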
Hi all, I have this URL/API endpoint: http://xml.app.com/pay/ent/auth/service/getId. I want to extract getId for an index that has the field name 'end_points', and create a table for that field that only displays the text 'getId' rather than the entire URL. How do I do this using regex in Splunk? I tried something like this:

rex "^http(s)?:\W+\w+\.\w+\.com\W\w+\W\w+\W\w+\W\w+\W(?<end_points>)" | table end_points

Since I started learning Splunk only a few days ago, I'm new to this. Any help would be appreciated. Thanks.
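The empty capture group (?<end_points>) is why the field comes back blank: there is no pattern inside it, so it matches zero characters. The simpler idea is to capture everything after the last slash, sketched here in Python:

```python
import re

url = "http://xml.app.com/pay/ent/auth/service/getId"
# Capture everything after the final "/" - the endpoint name
match = re.search(r"([^/]+)$", url)
end_points = match.group(1) if match else None
print(end_points)
```

An equivalent SPL sketch would be | rex field=end_points "(?<end_points>[^\/]+)$" | table end_points, which overwrites the field with just the last path segment.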
I am trying to set up the Splunk Add-on for Microsoft Office 365 by referring to the following URL. I'm trying to set up the "Service Status" and "Service Message" inputs, but they don't appear in the menu. If anyone knows the reason, please let me know.

Reference URL: https://www.splunk.com/ja_jp/blog/it/set-up-guide-microsoft365-vol2.html
As the title says, I have a list of subnets and would like to create a search that shows traffic (using Palo Alto logs) passing through those subnets, while still showing the subnets that had no traffic. I am using the query below, but it doesn't return results for subnets with 0 traffic. If anyone can help with a better version of this query using a lookup, and possibly data models, that would be great.

index=palo sourcetype=pan:log
| eval stan=case(cidrmatch("10.0.0.0/24",src),"stanA", cidrmatch("10.0.1.0/24",src),"stanB", cidrmatch("10.0.2.0/24",src),"stanC")
| stats count by stan
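A sketch of the lookup approach, assuming a hypothetical lookup file subnets.csv with columns cidr and stan (one row per subnet): append the full subnet list with a count of 0 so silent subnets still appear, then take the maximum per subnet.

```
index=palo sourcetype=pan:log
| lookup subnets.csv cidr AS src OUTPUT stan
| stats count BY stan
| append [| inputlookup subnets.csv | fields stan | eval count=0]
| stats max(count) AS count BY stan
```

For the lookup to match an IP against a CIDR range, the lookup definition needs match_type set to CIDR(cidr) in its advanced options; otherwise the eval/cidrmatch approach from the original query still works as the first step.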
I am using the Splunk Add-on for AWS with a Generic S3 data input, but I am unable to get the data into Splunk. I see this in splunk_ta_aws_generic_s3_<data_input_name>.log:

phase="fetch_key" | message="Failed to get object." key="filename.json"

Any help would be appreciated.
I know this may be backward, but do we have the ability to create an alert if data ingestion fails, so I can know ahead of time?
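This is a common pattern: rather than detecting the failure itself, alert when an index (or host/sourcetype) goes quiet. A hedged sketch, with the 60-minute threshold as an arbitrary example:

```
| tstats latest(_time) AS last_seen WHERE index=* BY index
| eval minutes_since_last_event = round((now() - last_seen) / 60)
| where minutes_since_last_event > 60
```

Saved as an alert that triggers when the number of results is greater than zero, this fires whenever any index has received nothing for over an hour.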
In my first post: I need to search Splunk using the REST API. How do I get the system to actually return some results? Steps:

1. POST a search, for example:

search=search index=myIndex earliest=-1d "[nice-keyword]" AND "Nice catch-phrase" | rex field=_raw "reportingSystem\":\s+\"(?<system>\d{3})[\s\S]+operationCode\":\s+\"(?<opcode>\w+)[\s\S]+ticketId\":\s+\"(?<ticket>\d*)[\s\S]+transactionCode\":\s+\"(?<txcode>\w+)[\s\S]+NumericCode\":\s+\"(?<agency>\d*)" | table system, opcode, txcode, agency

In the Search user interface, this makes a nice report.

2. Grab the search job ID.
3. Continually GET the job status of the POSTed search until it is DONE (or some other state that tells me to stop polling).
4. Ask for the job results. I get 200 OK but no content.

How does one actually format a search that can provide actual results via the API? Stumped, for days. I'm using Postman before moving on to my favorite middleware tool. Thank you.
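Two things commonly produce an empty-looking response here: requesting results before the job's dispatchState is DONE, and omitting output_mode (the default Atom/XML body can look empty in some clients). A sketch of how the POST body for /services/search/jobs might be built, using only the standard library (the search text is the example above, abbreviated):

```python
from urllib.parse import urlencode

# The jobs endpoint requires the explicit leading "search" command
spl = 'search index=myIndex earliest=-1d "[nice-keyword]" | table system, opcode'
body = urlencode({
    "search": spl,
    "output_mode": "json",    # ask for JSON back instead of Atom XML
    "exec_mode": "blocking",  # POST returns only once the job is done
})
print(body)
```

The results would then come from GET /services/search/jobs/<sid>/results?output_mode=json&count=0 (count=0 returns all rows rather than the default page size).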
Currently, I have a table that gives me severity categories:

Sevcat I | Sevcat II | Sevcat III
5 | 10 | 12

I'm using the following SPL to generate this table:

| eval CATI = if(SEVCAT="I", 1, 0)
| eval CATII = if(SEVCAT="II", 1, 0)
| eval CATIII = if(SEVCAT="III", 1, 0)
| stats sum(CATI) as "Sevcat I" sum(CATII) as "Sevcat II" sum(CATIII) as "Sevcat III"
| table "Sevcat I" "Sevcat II" "Sevcat III"

Is there some way to convert this table into a pie chart? Any help is appreciated. -Marco
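A pie chart wants one row per slice (a category field plus a count), not one column per category. Rather than the three eval/sum columns, a sketch of the same aggregation in pie-friendly shape:

```
| stats count BY SEVCAT
```

With rows like I/5, II/10, III/12, this renders directly when Pie is chosen in the visualization picker; the existing wide table can be kept in a separate panel fed by the same base search.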
My data is:

Mozilla/5.0 (X11; Linux x86_64; Catchpoint) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36

I want to extract the following substring from the above string with regex; can you please help me?

Chrome/87.0.4280.88
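The token is the literal Chrome/ followed by a dotted version number, which stops at the next space. A sketch in Python:

```python
import re

ua = ("Mozilla/5.0 (X11; Linux x86_64; Catchpoint) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36")
# "Chrome/" followed by digits and dots; the match ends at the next space
match = re.search(r"Chrome/[\d.]+", ua)
browser = match.group(0) if match else None
print(browser)
```

In Splunk, the equivalent would be something like | rex "(?<browser>Chrome\/[\d\.]+)" against whichever field holds the user-agent string (field name assumed).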
I spent a fair amount of time perusing Google and Splunk Answers but couldn't find a solution that made sense. Essentially, the requirement is to display a timestamp in a Splunk dashboard in a specific timezone, regardless of what user preferences people have configured. The reason: we have several members located globally who have a legitimate, more frequent need for their own timezone (so we can't ask them to change to Eastern), but this dashboard specifically needs to report on issues using Eastern time (the times need to look the same for everyone).

I feel like there must be some simple way to do this that I just haven't found. I'm not doing anything complicated right now, just converting a UNIX timestamp with strftime:

| eval openTime=strftime(openTime,"%m/%d/%Y %H:%M:%S")
| eval closedTime=strftime(closedTime,"%m/%d/%Y %H:%M:%S")

When I display them in a table, they show whatever timezone the user preference specifies. Every solution I've tried doesn't really solve this: it is easy to convert a timestamp with a timezone to Unix time, and easy to convert Unix time to a timestamp in your local timezone, but so far it seems impossible to convert a Unix timestamp to a specific timezone and have it display in that timezone instead of whatever the user has configured. Thoughts? To reiterate: we cannot ask these users to change their timezone preference, but these times MUST be shown in Eastern.
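The underlying idea, illustrated in Python: a Unix timestamp carries no timezone, so the fix is to render it with an explicit zone into a plain string; once a table cell is a pre-formatted string rather than an epoch, it displays identically for every viewer. The epoch value below is hypothetical; zoneinfo requires Python 3.9+ (and the tzdata package on Windows).

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

open_time = 0  # hypothetical Unix timestamp (epoch seconds)
# Render the timezone-free epoch explicitly in US Eastern, DST-aware
eastern = datetime.fromtimestamp(open_time, tz=ZoneInfo("America/New_York"))
formatted = eastern.strftime("%m/%d/%Y %H:%M:%S")
print(formatted)
```

The same principle applies in Splunk: if the Eastern-time string is computed before it reaches the table (for example, upstream of the dashboard, or by applying the Eastern UTC offset to the epoch, bearing in mind that a hard-coded offset ignores DST), every user sees the same text regardless of their timezone preference.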
When I access my dashboard, I can see all the data and graphs on the panels except for one specific graph when I choose the "Prior 7 Days" option. All the other date ranges populate as expected. I've checked the source code and everything seems to be working properly. How can I get the visualization for that one graph to show for "Prior 7 Days"?
Sometimes we do not receive intermediate updates from ServiceNow in Splunk; in that case, I need to send an alert saying that the update is missing in Splunk. Can someone please help with this issue?
Hello, I'm trying to figure out how to do 3 months of hot/warm/cold indexing but copy/forward logs every week to my frozen archive, which is located in a separate location. I'm trying to compensate for some issues we are having with our infrastructure uptime.

Q: Does this make sense, and is it possible? Could anyone provide examples or advice?
Q: Is there a difference in storage space used by sending data weekly vs. monthly (or every 90 days)?

Also, Splunk is installed in a Windows environment. Thank you, Sean
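A hedged sketch of the usual mechanism: Splunk freezes buckets automatically once every event in a bucket is older than frozenTimePeriodInSecs, and coldToFrozenDir makes it copy the bucket to an archive path instead of deleting it. The index name and paths below are examples only:

```
# indexes.conf sketch - index name and paths are placeholders
[my_index]
# Freeze a bucket once all of its events are older than ~90 days
frozenTimePeriodInSecs = 7776000
# Copy frozen buckets here (e.g. storage in the other location) instead of deleting
coldToFrozenDir = E:\frozen_archive\my_index
```

Freezing happens per bucket as each one ages out, not on a weekly schedule, so "weekly vs. every 90 days" mostly changes when copies land rather than how much space they use. Note also that frozen buckets retain only the compressed raw data (the index files are removed), so the archive is smaller than the searchable copy but must be thawed before it can be searched again.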
Hi Experts, I have installed an application on a Windows server which uses 3 services (AAA, BBB, CCC) to measure availability. I would like to ingest the status of those 3 services into Splunk to showcase/display the availability of the application. I'm using a universal forwarder on the Windows server and have also installed Splunk_TA_windows, but I'm not sure how to filter and ingest only those 3 specific services. Please help with ingesting this data. Regards, Karthikeyan.SV
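A hedged sketch, assuming the host-monitor input shipped with Splunk_TA_windows is acceptable: WinHostMon can emit one event per Windows service, and unwanted services can be dropped with index-time filtering. The stanza names and regex below are illustrative; the exact field layout of WinHostMon events (and whether the service names appear as Name or DisplayName) should be checked against a sample event first.

```
# inputs.conf (universal forwarder)
[WinHostMon://service]
type = service
interval = 300
disabled = 0

# props.conf (on the indexers or a heavy forwarder -
# index-time filtering does not run on a universal forwarder)
[WinHostMon]
TRANSFORMS-keep_3_services = drop_services, keep_target_services

# transforms.conf
[drop_services]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_target_services]
REGEX = DisplayName="?(AAA|BBB|CCC)
DEST_KEY = queue
FORMAT = indexQueue
```

The two transforms run in order: the first routes everything to the null queue, and the second rescues events matching the three target services back into the index queue.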