All Topics

Hey! So I'm using an EC2 Splunk AMI and have all the correct apps loaded, but I cannot for the life of me get the BOTS v1 data into my environment. I've put it into $SPLUNK_HOME/etc/apps (as mentioned on GitHub) and it did not work: Splunk simply does not pick up that this is a data set, and it just sits comfortably in my apps. Loading it in other ways means it doesn't come through correctly. Is this a timestamp issue? Any help would be much appreciated.
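If the data set ships as a pre-indexed app (as the BOTS downloads on GitHub do), the events carry their original timestamps from years ago, so the default "Last 24 hours" time range will never show them even when the app is loaded correctly. A minimal sanity check, assuming the bundled index is named botsv1 (adjust the index name if yours differs):

index=botsv1 earliest=0
| head 10

If that returns nothing, it is worth confirming an index from the app exists at all, for example with | eventcount summarize=false index=* | dedup index | table index.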
How do I get the instance of a heavy forwarder to set up on my VM box? Is the UF instance different from the HF instance?
Hello sir/madam, I tried to migrate data from Events Service 4.5.x to 23.x and I followed the steps mentioned in https://docs.appdynamics.com/appd/onprem/23.2/en/events-service-deployment/data-migration-from-events-service-4-5-x-to-23-x/migrate-events-service-from-version-4-5-x-to-23-x. When I finished those steps, an error occurred in the log files. After that, I submitted the new events service in admin.jsp, and then some data was lost. You can observe the thrown exception in the attached image. @Ryan.Paredez I look forward to hearing from you. Regards, Hamed
We are using the Microsoft Graph Security API Add-On to ingest all security alerts for our org via the Microsoft Graph Security API. Recently the access token became invalid and the data paused. We renewed the secret key and updated the configuration with the new one, but ingestion stayed paused and no backfill happened after that. Is there any way I can backfill that paused week of data? Any help, points of contact, or links to read would be appreciated. Is this something that needs to be done on the source end to re-ingest the data via the Add-On again?
Hi there, I have this query:

index=_internal source="*license_usage.log"
| eval bytes=b
| eval GB = round(bytes/1024/1024/1024,3)
| timechart span=1d sum(GB) by h

This query shows results like this:

_time        host1  ...
2023-11-10
2023-11-11
...

And I want results like this:

Host    2023-11-10  ...
host1
host2
...

How can I do this?
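Swapping rows and columns after a timechart is usually a job for the transpose command. A minimal sketch, assuming the split-by field h holds the host name and that the date strings are acceptable as column headers:

index=_internal source="*license_usage.log"
| eval bytes=b
| eval GB = round(bytes/1024/1024/1024,3)
| timechart span=1d sum(GB) by h
| eval _time = strftime(_time, "%Y-%m-%d")
| transpose 0 header_field=_time column_name=Host

The 0 argument lifts transpose's default five-row limit, header_field uses the formatted date as the new column headers, and column_name labels the first output column Host.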
Hello, I have a problem: how do I get a fourth value from a table and scatterplot to use that value in a token? We have a table with 4 columns and a scatterplot chart for display: article, value, calculated category and PartnerId. Unfortunately, the PartnerId is not displayed in the scatterplot chart. Can I somehow read out the fourth value to display further details on a PartnerId in a dashboard?
Sometimes, running the same search generates a different ordering when trellis visualization is used. For example:

((sourcetype=A field1=*) OR (sourcetype=B user=* field2=*)) clientip=* earliest="01/24/2023:02:00:00" latest="01/24/2023:08:00:00"
| fields clientip user field1 field2
| eval user = mvindex(split(user, ":"), 1)
| eventstats values(user) as user by clientip
| eval clientip = clientip . if(isnull(user), "/", "/" . mvjoin(user, ","))
| timechart span=5m limit=19 count(field1) as s1 count(field2) as s2 by clientip

Here, field1 only exists in sourcetype A, and user and field2 only exist in sourcetype B; the search period is fixed in the past, so the search results cannot change. But two consecutive executions (shown in the two screenshots) return the same number of trellis panels with exactly the same clientip titles, and each clientip's graph is the same across the two runs, yet the order is rearranged. (In the Statistics view, columns are arranged in lexicographic order of "s1: clientip" and "s2: clientip".) Is there some way to be certain of the order?
Greetings, I have Splunk 9.1.1 and am trying to import an Aruba 7210 into Splunk using the Aruba app over UDP 514, with sourcetype aruba:syslog. I have other devices (Cisco) going into the same Splunk instance and they are reporting OK. The Splunk server can ping the Aruba and vice versa. Should I try any other sourcetypes? Is there anything else I should look at under the hood to see why communication is not occurring? Thank you,
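Before trying other sourcetypes, it can help to confirm whether anything from the controller is arriving at all. A minimal sketch, assuming the event host is set to the Aruba controller's IP or hostname (substitute your own value):

index=* host=<aruba_ip_or_hostname> earliest=-60m
| stats count by index sourcetype

If that returns nothing, the problem sits upstream of sourcetype selection; the next steps would be checking the UDP 514 input stanza, any intermediate firewall rules, and the _internal metrics.log for incoming UDP connections.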
I have a use case that requires logging to be captured, and I have been following this document: How do I set up the ForgeRock Identity Cloud app for Splunk? Which references --> https://splunkbase.splunk.com/app/6272

The ForgeRock Identity Cloud App for Splunk captures audit and debug logs from ForgeRock Identity Cloud tenants. A sample dashboard is included to graphically illustrate various captured metrics, for example authentication events, identity registrations, and top active users. Sample searches are also included to extend or modify the sample dashboard.

The problem is that the app should not be calling the /monitoring/logs/tail endpoint. It should be calling the /monitoring/logs endpoint, as noted in the ForgeRock product documentation. To reduce unwanted stress on the system, Identity Cloud limits the number of requests you can make to the /monitoring/logs endpoint in a certain timeframe: the page-size limit is 1000 logs per request and the request limit is 60 requests per minute, so the theoretical upper rate limit is 60,000 logs per minute.

The reason this needs to be changed is that the /monitoring/logs/tail endpoint has the same limits and response headers as the /monitoring/logs endpoint described above, but it also has a limit of 20,000 lines per request, which supersedes the page-size limit of 1000 logs per request. Because calls to the /monitoring/logs/tail endpoint do not always fetch all logs, that endpoint should be used for debugging only; the /monitoring/logs endpoint should be used when you need to fetch all logs.

I did find:

grep -i -R "/tail" forgerock/

which pointed me to:

forgerock//bin/input_module_forgerock.py:        response = helper.send_http_request(forgerock_id_cloud_tenant_url + "/monitoring/logs/tail", 'GET', parameters=parameters, payload=None, headers=headers, cookies=None, verify=True, cert=None, timeout=60, use_proxy=False)

Lines 51-52 of input_module_forgerock.py show:

# The following examples send rest requests to some endpoint.
response = helper.send_http_request(forgerock_id_cloud_tenant_url + "/monitoring/logs/tail", 'GET', parameters=parameters, payload=None, headers=headers, cookies=None, verify=True, cert=None, timeout=60, use_proxy=False)

I suspect updating this to /monitoring/logs as follows, and restarting the app, may resolve this:

# The following examples send rest requests to some endpoint.
response = helper.send_http_request(forgerock_id_cloud_tenant_url + "/monitoring/logs", 'GET', parameters=parameters, payload=None, headers=headers, cookies=None, verify=True, cert=None, timeout=60, use_proxy=False)

But when trying to grab logs it is failing:

2023-11-16 15:33:34,178 DEBUG pid=261576 tid=MainThread file=connectionpool.py:_make_request:461 | https://openam-testxyz.id.forgerock.io:443 "GET /monitoring/logs?source=am-authentication%2Cam-access%2Cam-config%2Cidm-activity&_pagedResultsCookie=eyJfc29ydEzbnRpY25Il19fQ HTTP/1.1" 500 74
2023-11-16 15:33:34,179 INFO pid=261576 tid=MainThread file=base_modinput.py:log_info:295 | Unexpected response from ForgeRock: 500
2023-11-16 15:33:34,179 ERROR pid=261576 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/forgerock/bin/forgerock/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/forgerock/bin/forgerock.py", line 76, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/forgerock/bin/input_module_forgerock.py", line 60, in collect_events
    response.raise_for_status()
  File "/opt/splunk/etc/apps/forgerock/bin/forgerock/aob_py3/requests/models.py", line 943, in raise_for_status
    raise HTTPError(http_error_msg, response=self)

Hoping someone has an idea @jknight
I have the search below and I'm trying to use a different time period within each part of the search. For example, msg="*Completed *" uses the time picker input; I would like to search for data one hour before the time picker range (so this should be dynamic) for msg="*First *". I'm not sure if this is possible. I'm comparing these two searches, and the initial log msg="*First *" can occur several minutes before the msg="*Completed *" log, so when I compare them, some of the log messages get cut off depending on what I select in my time picker. I would like to search for these messages starting 1 hour before my time picker selection. Long term this search will go into a Splunk dashboard.

(index=color name IN ("green","blue") msg="*First *" ```earliest="11/09/2023:09:00:00" latest="11/09/2023:12:59:59"```) OR (index=color name IN ("blue2","green2") msg="*Completed *")
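One pattern that may work here is to let a subsearch compute the widened time window from whatever the time picker currently supplies: addinfo exposes the search's own time bounds as info_min_time and info_max_time, and return earliest latest turns them into inline time modifiers for just that clause. A rough sketch, not tested against this data:

(index=color name IN ("green","blue") msg="*First *"
    [| makeresults
     | addinfo
     | eval earliest=info_min_time-3600, latest=info_max_time
     | return earliest latest])
OR (index=color name IN ("blue2","green2") msg="*Completed *")

In a dashboard, an alternative is to derive a second earliest token from the time picker token in the XML, though relative values such as -24h would need converting to epoch seconds before subtracting 3600.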
Hi, I have two problems with a log line.

1) I have a log line that is occasionally inserted. It is a schedule, and I wish to extract the data from it. The entry has multiple eventTitle= values; however, Splunk is only pulling the first occurrence from the log line and ignoring the rest. So I get eventTitle=BooRadley in my fields, instead of eventTitle=BooRadley, eventTitle=REGGAE-2 and eventTitle=CHRISTIAN MISSION. I have tried using regex and | kv pairdelim="=", kvdelim=",". I am unsure whether a line break would work, as the entries are referenced to SArts - this is a field extracted via regex and changes.

2) The log line is about 9999 characters long including spaces, and not all of the log line is ingested - I think I need to create a limits.conf file?

Below is an abridged extract of the log line:

20231117154211 [18080-exec-9] INFO EventConversionService () - SArts: VUpdate(system=GRP1-VIPE, channelCode=UH, type=NextEvents, events=[Event(onAir=true, eventNumber=725538339, utcStartDateTime=2023-11-17T15:42:10.160Z, duration=00:00:05.000, eventTitle=BooRadley, contentType=Prog ), Event(onAir=false, eventNumber=725538313, utcStartDateTime=2023-11-17T15:42:15.160Z, duration=00:00:02.000, eventTitle= REGGAE-2, contentType=Bumper), Event(onAir=false, eventNumber=725538320, utcStartDateTime=2023-11-17T15:42:17.160Z, duration=00:01:30.000, eventTitle=CHRISITAN MISSION , contentType=Commercial), Event…

This is my code so far:

| rex "\-\s+(?<channel_name>.+)\:\sVUpdate"
| stats values(eventNumber) by channel_name channelCode utcStartDateTime eventTitle duration
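For the first problem, field extraction only keeps the first match unless told otherwise; rex with max_match=0 collects every occurrence into a multivalue field. A minimal sketch, assuming each title ends at the next comma:

| rex max_match=0 "eventTitle=\s*(?<eventTitle>[^,]+)"

The same max_match=0 approach can be applied to eventNumber, utcStartDateTime and duration if all occurrences of those are needed too. For the second problem, truncation around 10,000 characters is consistent with the default TRUNCATE setting for the sourcetype, which lives in props.conf rather than limits.conf; raising TRUNCATE for this sourcetype on the indexing tier should let the full line through.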
We are using this license: Splunk Enterprise Term License - No Enforcement 6.5. I am an administrator, and when I try to create a new alert I get "server error". When I check splunkd.log, I see the following:

11-17-2023 11:03:02.381 +0000 ERROR AdminManager - Argument "app" is not supported by this handler.

I investigated all of this after seeing these warnings in scheduler.log:

11-17-2023 07:35:00.513 +0000 WARN SavedSplunker - Savedsearch scheduling cannot be inherited from another user's search. Schedule ignored for savedsearch_id="nobody;search;Proxy NGINX Errors Alert"
11-17-2023 07:35:00.513 +0000 WARN SavedSplunker - Savedsearch scheduling cannot be inherited from another user's search. Schedule ignored for savedsearch_id="nobody;search;Proxy issue"
11-17-2023 07:35:00.513 +0000 WARN SavedSplunker - Savedsearch scheduling cannot be inherited from another user's search. Schedule ignored for savedsearch_id="nobody;search;Failed linux logins Clone8"

I also checked the license manager; sometimes we are exceeding the quota, but as far as I investigated, this doesn't remove the alerting functionality...
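The SavedSplunker warnings point at scheduled searches owned by "nobody" (typically orphaned after their original owner was removed), which is a separate issue from the license quota. A quick way to list scheduled searches and their owners, sketched here with the rest command (assuming your role can run it):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search is_scheduled=1
| table title eai:acl.app eai:acl.owner cron_schedule disabled

Reassigning the affected searches to a real owner usually clears that particular warning.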
I am trying to output two rows of data, "read" and "write", each with the min, max, and avg of some values. Currently I am only able to display one row, and I don't know Splunk well enough to use the other set of spath variables to display the other row. This is my search and output:

index="collectd_test" plugin=disk type=disk_octets plugin_instance=dm-0
| spath output=values0 path=values{0}
| spath output=values1 path=values{1}
| spath output=dsnames0 path=dsnames{0}
| spath output=dsnames1 path=dsnames{1}
| stats min(values0) as min max(values0) as max avg(values0) as avg by dsnames0
| eval min=round(min, 2)
| eval max=round(max, 2)
| eval avg=round(avg, 2)
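Since dsnames{} and values{} are parallel arrays in the collectd JSON, one way to get both rows is to extract them as multivalue fields, zip them together, and expand to one result per name/value pair before the stats. A sketch, assuming the standard collectd write_http payload layout:

index="collectd_test" plugin=disk type=disk_octets plugin_instance=dm-0
| spath output=dsnames path=dsnames{}
| spath output=values path=values{}
| eval pair=mvzip(dsnames, values)
| mvexpand pair
| eval dsname=mvindex(split(pair, ","), 0), value=tonumber(mvindex(split(pair, ","), 1))
| stats min(value) as min max(value) as max avg(value) as avg by dsname
| foreach min max avg [ eval <<FIELD>>=round(<<FIELD>>, 2) ]

dsname should then come out as read and write, one row each.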
Device_ID : 1, A.txt
2021-07-06 23:30:34.2379| Started!
2021-07-06 23:30:34.6808|3333|-0.051|0.051|0.008|0.016

Device_ID : 1, E.txt
2021-07-13 18:28:26.7769|**
2021-07-13 18:28:27.1363|aa

Device_ID : 2, E.txt
2016-03-02 13:56:06.9283|**
2016-03-02 13:56:07.3333|ff

Device_ID : 2, A.txt
2020-03-02 13:42:30.0111| Started!
2020-03-02 13:42:30.0111|444|-0.051|0.051|0.008|0.016

Query:

index="xx" source="*A.txt"
| eval Device_ID=mvindex(split(source,"/"),5)
| reverse
| table Device_ID _raw
| rex field=_raw "(?<timestamp>[^|]+)\|(?<Probe_ID>[^|]+)"
| table Device_ID timestamp Probe_ID
| rex mode=sed field=timestamp "s/\\\\x00/ /g"
| table Device_ID timestamp Probe_ID
| eval time=strptime(timestamp,"%F %T.%4N")
| streamstats global=f max(time) as latest_time by Device_ID
| where time >= latest_time
| eval _time=strptime(timestamp,"%Y-%m-%d %H:%M:%S.%4N")
| table Device_ID _time Probe_ID
| join type=left Device_ID
    [ search index="xx" source="*E.txt"
    | eval Device_ID=mvindex(split(source,"/"),5)
    | reverse
    | rex field=_raw "(?<timestamp>[^|]+)"
    | stats first(timestamp) as earliesttime last(timestamp) as latesttime by Device_ID
    | table Device_ID earliesttime latesttime ]
| where _time >= strptime(earliesttime, "%Y-%m-%d %H:%M:%S.%4N") AND _time <= strptime(latesttime, "%Y-%m-%d %H:%M:%S.%4N")
| search Device_ID="1"

I am filtering events in A.txt based on the earliest timestamp in E.txt. It works for Device_ID 1 but not for Device_ID 2, and both logs have the same format. The query does not generate the earliest and latest timestamps for Device_ID 2, although if I run the subsearch alone, it does.
TC Execution Summary for Last Quarter

No. of job runs   AUS   JER   IND   ASI
August            150   121   110   200
Sept              200   140   150   220
Oct               100   160   130   420

I want to write a query for the above table.
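A starting point, assuming each job run is a single event carrying a region field with values such as AUS, JER, IND and ASI (the index, sourcetype and field names here are placeholders to be replaced with your own):

index=<your_index> sourcetype=<your_sourcetype> earliest=-3mon@mon latest=@mon
| eval Month=strftime(_time, "%B")
| chart count over Month by region

If the months come out in alphabetical rather than calendar order, sorting on a numeric month field derived with strftime(_time, "%m") before renaming it is the usual workaround.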
For one of our SQL servers running UF version 9.1.1, I can see a lot of errors reported with EventCode=4506 and the message below. When I check the application logs, every 60 minutes I can see around 744,252 of these events, so kindly let me know how I can get them fixed.

11/17/2023 06:15:23 AM
LogName=Application
EventCode=4506
EventType=2
ComputerName=abc.def.xyz
SourceName=HealthService
Type=Error
RecordNumber=xxxxxxxxxx
Keywords=Classic
TaskCategory=None
OpCode=None
Message=Splunk could not get the description for this event. Either the component that raises this event is not installed on your local computer or the installation is corrupt. FormatMessage error: Got the following information from this event: AB-Prod Microsoft.SQLServer.Windows.CollectionRule.DatabaseReplica.FileBytesReceivedPerSecond abc\ABC_PROD.WSS_Content_internal_portal_xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}

So how can we get them fixed? Kindly help on the same.
After installation and configuration of the machine agent on a local machine to collect metrics, the metrics are not populating properly and the data that is displayed is not complete. We are not able to see CPU percentage, memory percentage, etc. Please suggest how to pull complete metrics into AppDynamics. Are any configuration file changes needed, or any config changes in the AppDynamics UI?
I have a Splunk Cloud instance where we send logs from servers with the Universal Forwarder installed. All UFs are managed by a Deployment Server. My question is: what are the best practices on how to organize apps, both Splunkbase-downloaded and in-house built, and also configuration-only apps, if those are a best practice? Right now we are experimenting with deploying the Splunkbase apps as they are (easier to update them) and deploying the configuration in an extra app whose name starts with numbers so that its configuration takes precedence. But we have run into some issues in the past with this approach.
Hi All, I have a requirement to onboard data from a website like http://1.1.1.1:1234/status/v2. It is a vendor-managed API URL, so the application team cannot use the HEC token option. I have prepared a script to get the data, tested it locally, and the script works as expected. I created a forwarder app with a bin folder, kept the script in it, and pushed the app to one of our Integration Forwarders, but I am unable to get any data into Splunk. I have tested the connectivity between our IF and the URL and it is successful (did a curl to that URL and was able to see the URL content). I have checked firewall rules and permissions and all seems to be OK, but I still cannot get data into Splunk. I also checked the internal index but don't find anything there. Can someone guide me on what else I need to check in order to get this fixed?

Below is my inputs.conf:

[monitor://./bin/abc.sh]
index=xyz
disabled=false
interval = 500
sourcetype=script:abc
source=abc.sh

I have also created props as below:

[script:abc]
DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = true
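Two things may be worth checking here. First, the stanza above uses the monitor:// scheme, which tails files; scripted inputs are normally declared with the script:// scheme so that Splunk executes the script on the given interval and indexes its stdout. Second, execution errors for scripted inputs are logged by the ExecProcessor component, so a quick check against the forwarder's internal logs (assuming _internal is being forwarded) is:

index=_internal sourcetype=splunkd component=ExecProcessor "abc.sh"

If nothing at all from that forwarder appears in _internal, then forwarding or the output group itself is the first thing to verify before looking at the input.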
Hello Experts,

This is a long-explored query that I am trying to find a way around. If we do a simple query like this:

index=zzzzzz
| stats count as Total, count(eval(txnStatus="FAILED")) as "Failed_Count", count(eval(txnStatus="SUCCEEDED")) as "Passed_Count" by country, type, ProductCode
| fields country, ProductCode, type, Failed_Count, Passed_Count, Total

this simple query gives me a result table where Total belongs to the specific country and ProductCode, i.e. an individual Total.

Now there is a field 'errorinfo'. What I want is to also show 'errorinfo' when it is a "codeerror" in the above list, like this:

index=zzzzzz
| stats count as Total, count(eval(txnStatus="FAILED")) as "Failed_Count", count(eval(txnStatus="SUCCEEDED")) as "Passed_Count" by country, type, ProductCode, errorinfo
| fields country, ProductCode, type, Failed_Count, Passed_Count, errorinfo, Total

This table shows results like this:

country  ProductCode  type  Failed_Count  Passed_Count  errorinfo             Total
usa      111          1c    4             0             wrong code value      4
usa      111          1c    6             0             wrong field selected  6
usa      111          1c    0             60            NA                    70

How can I get the results to look like the table below, where Total remains the complete total over the txnStatus field (FAILED + SUCCEEDED)? If I can achieve this, I can compute a % of total as well. Note that Total belongs to one country: the usa rows show the usa total and the can rows show the can total.

country  ProductCode  type  Failed_Count  errorinfo             Total
usa      111          1c    4             wrong code value      70
usa      111          1c    6             wrong field selected  70
can      222          1b    2             wrong entry           50
can      222          1b    6             code not found        50

Thanks in advance
Nishant
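One way to keep the per-errorinfo failure counts while carrying the overall per-country/product total is to do the breakdown with stats and then add the total back with eventstats. A sketch along those lines, assuming the field names from the question:

index=zzzzzz
| stats count(eval(txnStatus="FAILED")) as Failed_Count, count(eval(txnStatus="SUCCEEDED")) as Passed_Count by country, type, ProductCode, errorinfo
| eventstats sum(Failed_Count) as TotalFailed, sum(Passed_Count) as TotalPassed by country, type, ProductCode
| eval Total = TotalFailed + TotalPassed
| eval Failed_Pct = round(Failed_Count / Total * 100, 2)
| where Failed_Count > 0
| fields country, ProductCode, type, Failed_Count, errorinfo, Total, Failed_Pct

The where clause drops the all-success rows (errorinfo = NA) so the output matches the desired table; remove it if those rows should stay.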