Hello, I'm having issues performing field extractions with a transforms configuration. It isn't producing field-value pairs as expected. Sample events and configuration files are given below; some non-uniformities within the events are also marked in bold. Any recommendations would be highly appreciated. Thank you so much.

My configuration files:

props.conf
[mypropfConf]
REPORT-mytranforms = myTransConf

transforms.conf
[myTransConf]
REGEX = ([^"]+?):'([^"]+?)'
FORMAT = $1::$2

Sample events:

2023-11-15T18:56:29.098Z OTESTN097MA4515620 TESTuser20248: UserID: '90A', UserType: 'TempEMP', System: 'TEST', UAT: 'UTA-True', EventType: 'TEST', EventID: 'Lookup', Subject: 'A5367817222', Scode: '' EventStatus: 0, TimeStamp: '2023-11-03T15:56:29.099Z', Device: 'OTESTN097MA4513020', Msg: 'lookup ok', var: 'Sec'
2023-11-15T18:56:29.021Z OTESTN097MB7513020 TESTuser20249: UserID: '95B', UserType: 'TempEMP', System: 'TEST', UAT: 'UTA-True', EventType: 'TEST', EventID: 'Lookup', Subject: 'A516670222', Scode: '' EventStatus: 0, TimeStamp: '2023-11-03T15:56:29.099Z', Device: 'OTESTN097MA4513020', Msg: 'lookup ok', var: 'tec'
2023-11-15T18:56:29.009Z OTESTN097MB9513020 TESTuser20248: UserID: '95A', UserType: 'TempEMP', System: 'TEST', UAT: 'UTA-True', EventType: 'TEST', EventID: 'Lookup', Subject: 'A546610222', Scode: '' EventStatus: 0, TimeStamp: '2023-11-03T15:56:29.099Z', Device: 'OTESTN097MA4513020', Msg: 'lookup ok', var: 'test'
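One possible adjustment, offered as a minimal sketch: in these events the keys are unquoted words and the values are single-quoted, so a key capture of [^"]+? will also swallow the ", " separators between pairs. A transforms-based extraction along these lines (transforms stanza name follows the post; the props stanza name is a placeholder for the actual sourcetype) may behave better:

# transforms.conf - with FORMAT = $1::$2 the regex is applied repeatedly,
# yielding one field per key: 'value' pair; [^']* also allows empty values such as Scode: ''
[myTransConf]
REGEX  = (\w+):\s*'([^']*)'
FORMAT = $1::$2
MV_ADD = true

# props.conf - the stanza must match the events' sourcetype (placeholder below)
[your_sourcetype]
REPORT-mytranforms = myTransConf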
I have tried to simplify the query for better understanding and to remove some unnecessary things. The goal is to find out whether the same malware has been found on more than 4 hosts (dest) in a given time span, i.e., a malware outbreak. Below is the index-based query that works fine. I am trying to convert it to a data model based query, but I am not getting the desired results. I am new to writing data model based queries. Thanks for all the help!

(`cim_Malware_indexes`) tag=malware tag=attack
| eval grouping_signature=if(isnotnull(file_name),signature . ":" . file_name,signature)   => creates a new field called "grouping_signature" by concatenating the signature and file_name fields
| stats count dc(dest) as infected_device_count BY grouping_signature   => calculates the distinct count of hosts that have the same malware found on them, by the "grouping_signature" field
| where infected_device_count > 4   => keeps results where the number of infected devices is greater than 4
| stats sum(count) AS "count" sum(infected_device_count) AS infected_device_count BY grouping_signature   => totals the counts by the "grouping_signature" field
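For comparison, a rough data model equivalent, assuming the CIM Malware data model (root dataset Malware_Attacks) is accelerated and mapped to these indexes; summariesonly and fillnull_value are optional, and the field names may need adjusting to the local CIM version:

| tstats summariesonly=true fillnull_value="unknown" count dc(Malware_Attacks.dest) as infected_device_count from datamodel=Malware.Malware_Attacks by Malware_Attacks.signature Malware_Attacks.file_name
| rename Malware_Attacks.* as *
| eval grouping_signature=if(file_name!="unknown", signature . ":" . file_name, signature)
| stats sum(count) as count sum(infected_device_count) as infected_device_count by grouping_signature
| where infected_device_count > 4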
Hi all, I'm having difficulty crafting a regex that will extract a field that can be either one or multiple words. Using "add field" in Splunk Enterprise doesn't seem to get the job done either. The field I would like to extract is "Country", which can be one word or several. Any help would be appreciated. Below is my regex and a sample of the logs from which I am trying to extract fields. I don't consider myself a regex guru, so don't laugh at my field extraction regex. It works on everything except the country.

User\snamed\s(\w+\s\w+)\sfrom\s(\w+)\sdepartment\saccessed\sthe\sresource\s(\w+\.\w{3})(\/\w+\.*\/*\w+\.*\w{0,4})\sfrom\sthe\ssource\sIP\s(\d+\.\d+\.\d+\.\d+)\sand\scountry\s\W(\w+\s*)

11/17/23 2:25:22.000 PM [Network-log]: User named Linda White from IT department accessed the resource Cybertees.THM/signup.html from the source IP 10.0.0.2 and country France at: Fri Nov 17 14:25:22 2023 host = ***** source = networks sourcetype = network_logs
[Network-log]: User named Robert Wilson from HR department accessed the resource Cybertees.THM/signup.html from the source IP 10.0.0.1 and country United States at: Fri Nov 17 14:25:11 2023 host = ***** source = networks sourcetype = network_logs
11/17/23 2:25:21.000 PM [Network-log]: User named Christopher Turner from HR department accessed the resource Cybertees.THM/products/product2.html from the source IP 192.168.0.100 and country Germany at: Fri Nov 17 14:25:17 2023 host = ***** source = networks sourcetype = network_logs
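A small change that handles multi-word countries, shown as a search-time rex sketch: instead of relying on \w+, let the capture run up to the " at:" that follows the country in these samples.

| rex "and\scountry\s(?<Country>.+?)\s+at:"

The same idea works inside the full extraction by replacing the trailing \W(\w+\s*) with (?<Country>.+?)\s+at: so "United States" is captured as readily as "France".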
Hi all, I created an environment with the following instances: a cluster master, three search heads, four indexers, a heavy forwarder, a license server, a deployment server, and a deployer. We have more than 50 clients, so I deployed the deployment server on a dedicated server. We have several indexes, but one of them (say, index A) receives about 35K events per minute. The heavy forwarder load-balances the events across the four indexers. The replication factor is 4 and the search factor is 3. A simple search like 'index=A' returns about 17M events in about 5 minutes. I want to speed up searches on index A, and I can change the whole deployment and environment if anyone has an idea for doing so. I would be grateful if anyone could advise on parameters such as replication factor, search factor, number of indexers, and so on, to speed up the search. Thank you.
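One note that may or may not apply here: if the searches over index A mostly count or group events rather than read the raw events themselves, a tstats sketch over index-time fields (index, host, source, sourcetype, _time) is usually far faster than a raw 'index=A' search, independently of replication or search factor:

| tstats count where index=A by host sourcetype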
Hey! So I'm using an EC2 Splunk AMI and have all the correct apps loaded, but I cannot for the life of me get the BOTS v1 data into my environment. I've put it into $SPLUNK_HOME/etc/apps (as mentioned on GitHub) and it did not work; Splunk simply does not pick it up as a data set, and it just sits comfortably among my apps. Loading it in other ways means it doesn't come through correctly. Is this a timestamp issue? Any help would be much appreciated.
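In case it is the time range rather than the install: the BOTS v1 download is a pre-indexed dataset app, so (assuming the standard layout described in the dataset's README) the events live in an index named botsv1 and date from 2016, which means nothing shows up unless the search runs over All time. A quick check, run with the time picker set to All time:

| tstats count where index=botsv1 by sourcetype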
How do I get the heavy forwarder instance to set up on my VM? Does the UF instance differ from the HF instance?
Hello sir/madam, I tried to migrate data from Events Service 4.5.x to 23.x and followed the steps at https://docs.appdynamics.com/appd/onprem/23.2/en/events-service-deployment/data-migration-from-events-service-4-5-x-to-23-x/migrate-events-service-from-version-4-5-x-to-23-x. When I finished those steps, an error occurred in the log files. After that, I submitted the new events service in admin.jsp and then some data was lost. You can observe the thrown exception in the attached image. @Ryan.Paredez I look forward to hearing from you. Regards, Hamed
We are using the Microsoft Graph Security API Add-On to ingest all security alerts for our org via the Microsoft Graph Security API. Recently the access token became invalid and the data ingestion paused. We renewed the secret key and updated the configuration with the new one, but no backfill happened for the week the ingestion was paused. Is there any way I can backfill that paused week of data? Any help, points of contact, or links to read would be appreciated. Is that something to do on the source end, to re-ingest the data via the Add-On again?
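One possible avenue for the gap, assuming it has to be pulled manually rather than through the add-on's own checkpoint: the Graph security alerts endpoint supports filtering on createdDateTime, so the missed window could be requested directly (the dates below are placeholders for the paused week) and then ingested separately:

GET https://graph.microsoft.com/v1.0/security/alerts?$filter=createdDateTime ge 2023-11-01T00:00:00Z and createdDateTime le 2023-11-08T00:00:00Z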
Hi there, I have this query:

index=_internal source="*license_usage.log"
| eval bytes=b
| eval GB = round(bytes/1024/1024/1024,3)
| timechart span=1d sum(GB) by h

This query shows results like this (one row per day, one column per host):

_time        host1  ...
2023-11-10
2023-11-11
...

And I want results like this (one row per host, one column per day):

Host    2023-11-10  ...
host1
host2
...

How can I do this?
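One way to swap the axes is to append a transpose to the existing timechart, as a sketch; formatting _time first keeps the dates readable as column headers:

... | timechart span=1d sum(GB) by h
| eval _time=strftime(_time, "%Y-%m-%d")
| transpose 0 header_field=_time column_name=Host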
Hello, I have a problem: how do I get a fourth value from a table and scatter plot so I can use it in a token? We have a table with 4 columns and a scatter plot chart for display: article, value, calculated category, and PartnerId. Unfortunately, the PartnerId is not displayed in the scatter plot chart. Can I somehow read out the fourth value in order to display further details on a PartnerId in a dashboard?
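If this is a Simple XML dashboard, one sketch is to drive the token from the table panel's drilldown, since the clicked row's columns are available as $row.<field>$ (the token name below is a placeholder):

<table>
  <search>... existing search returning article, value, category, PartnerId ...</search>
  <drilldown>
    <!-- capture the clicked row's PartnerId for use by other panels -->
    <set token="partner_id">$row.PartnerId$</set>
  </drilldown>
</table>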
Sometimes, running the same search produces a different panel order when trellis visualization is used. For example:

((sourcetype=A field1=*) OR (sourcetype=B user=* field2=*)) clientip=* earliest="01/24/2023:02:00:00" latest="01/24/2023:08:00:00"
| fields clientip user field1 field2
| eval user = mvindex(split(user, ":"), 1)
| eventstats values(user) as user by clientip
| eval clientip = clientip . if(isnull(user), "/", "/" . mvjoin(user, ","))
| timechart span=5m limit=19 count(field1) as s1 count(field2) as s2 by clientip

Here, field1 only exists in sourcetype A; user and field2 only exist in sourcetype B; the search period is fixed in the past. This means the search results cannot change. Yet two consecutive executions (screenshots not reproduced here) show the same number of trellis panels with exactly the same clientip titles, and each clientip's graph is identical across the two runs, but the order is clearly rearranged. (In the Statistics view, columns are arranged in lexicographic order of "s1:clientip" and "s2:clientip".) Is there some way to make the order deterministic?
Greetings, I have Splunk 9.1.1 and am trying to import an Aruba 7210 into Splunk using the Aruba app over UDP 514, with sourcetype aruba:syslog. I have other devices (Cisco) going into the same Splunk instance and they are reporting fine. The Splunk server can ping the Aruba and vice versa. Should I try any other sourcetypes? Is there anything else I should look at under the hood to see why the data is not coming in? Thank you.
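A couple of checks that may help narrow it down, as a sketch: confirm whether any datagrams are reaching the UDP input at all, and whether the events are landing under an unexpected index or sourcetype (the host filter below is a placeholder for the controller's hostname or IP).

index=_internal source=*metrics.log* group=udpin_connections
index=* host=<aruba-hostname-or-ip> | stats count by index sourcetype

It is also worth confirming that Splunk can actually bind UDP 514, since ports below 1024 need elevated privileges on Linux.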
I have a use case that requires logging to be captured, and I have been following this document: How do I set up the ForgeRock Identity Cloud app for Splunk? Which references --> https://splunkbase.splunk.com/app/6272

ForgeRock Identity Cloud App for Splunk captures audit and debug logs from ForgeRock Identity Cloud tenants. A sample dashboard is included to graphically illustrate various captured metrics, for example, authentication events, identity registrations, and top-active users. Sample searches are also included to extend or modify the sample dashboard.

The problem is that the app should not be calling the endpoint /monitoring/logs/tail. It should be calling /monitoring/logs, as noted in the ForgeRock product documentation. To reduce unwanted stresses on the system, Identity Cloud limits the number of requests you can make to the /monitoring/logs endpoint in a certain timeframe: the page-size limit is 1000 logs per request, the request limit is 60 requests per minute, so the theoretical upper rate limit is 60,000 logs per minute. The reason this needs to be changed: the /monitoring/logs/tail endpoint has the same limits and response headers as the /monitoring/logs endpoint, but it also has a limit of 20,000 lines per request, which supersedes the page-size limit of 1000 logs per request. Because calls to the /monitoring/logs/tail endpoint do not always fetch all logs, that endpoint is meant for debugging only; the /monitoring/logs endpoint should be used when you need to fetch all logs.

I did find:

grep -i -R "/tail" forgerock/

which pointed me to:

forgerock//bin/input_module_forgerock.py:        response = helper.send_http_request(forgerock_id_cloud_tenant_url + "/monitoring/logs/tail", 'GET', parameters=parameters, payload=None, headers=headers, cookies=None, verify=True, cert=None, timeout=60, use_proxy=False)

Lines 51-52 of input_module_forgerock.py show:

# The following examples send rest requests to some endpoint.
response = helper.send_http_request(forgerock_id_cloud_tenant_url + "/monitoring/logs/tail", 'GET', parameters=parameters, payload=None, headers=headers, cookies=None, verify=True, cert=None, timeout=60, use_proxy=False)

I suspect updating this to /monitoring/logs and restarting the app may resolve this:

# The following examples send rest requests to some endpoint.
response = helper.send_http_request(forgerock_id_cloud_tenant_url + "/monitoring/logs", 'GET', parameters=parameters, payload=None, headers=headers, cookies=None, verify=True, cert=None, timeout=60, use_proxy=False)

But when trying to grab logs, it's failing:

2023-11-16 15:33:34,178 DEBUG pid=261576 tid=MainThread file=connectionpool.py:_make_request:461 | https://openam-testxyz.id.forgerock.io:443 "GET /monitoring/logs?source=am-authentication%2Cam-access%2Cam-config%2Cidm-activity&_pagedResultsCookie=eyJfc29ydEzbnRpY25Il19fQ HTTP/1.1" 500 74
2023-11-16 15:33:34,179 INFO pid=261576 tid=MainThread file=base_modinput.py:log_info:295 | Unexpected response from ForgeRock: 500
2023-11-16 15:33:34,179 ERROR pid=261576 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/forgerock/bin/forgerock/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/forgerock/bin/forgerock.py", line 76, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/forgerock/bin/input_module_forgerock.py", line 60, in collect_events
    response.raise_for_status()
  File "/opt/splunk/etc/apps/forgerock/bin/forgerock/aob_py3/requests/models.py", line 943, in raise_for_status
    raise HTTPError(http_error_msg, response=self)

Hoping someone has an idea. @jknight
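One guess about that 500, offered purely as an assumption: the _pagedResultsCookie in the failing request was presumably saved while the input was still calling /monitoring/logs/tail, and a cookie issued by the tail endpoint may not be accepted by /monitoring/logs. A sketch of clearing it once before the first call after the endpoint change, around the app's existing request in input_module_forgerock.py:

# Assumption: a checkpoint cookie issued by /monitoring/logs/tail is stale for /monitoring/logs.
# Drop it so the first call after the endpoint change starts a fresh paged query.
parameters.pop("_pagedResultsCookie", None)
response = helper.send_http_request(forgerock_id_cloud_tenant_url + "/monitoring/logs", 'GET', parameters=parameters, payload=None, headers=headers, cookies=None, verify=True, cert=None, timeout=60, use_proxy=False)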
I have the search below and I'm trying to use different time periods within each part of it. For example, msg="*Completed *" uses the time picker input, and I would like the msg="*First *" clause to search data starting one hour before the time picker's earliest time (so this should be dynamic). I'm not sure if this is possible. I'm comparing these two sets of events, and the initial msg="*First *" log can occur several minutes before the msg="*Completed *" log, so depending on my time picker selection some of these log messages get cut off when I compare them. I would like to search for the First messages one hour before my time picker selection. Long term, this search will go into a Splunk dashboard.

(index=color name IN ("green","blue") msg="*First *" ```earliest="11/09/2023:09:00:00" latest="11/09/2023:12:59:59"```) OR (index=color name IN ("blue2","green2") msg="*Completed *")
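I'm not certain this fits the requirement, but one commonly used pattern for "time picker minus one hour" is to let a subsearch compute the window from the search's own time range with addinfo, since earliest/latest accept epoch values and in-search time modifiers override the picker for that clause. A sketch applied only to the First clause:

(index=color name IN ("green","blue") msg="*First *"
    [| makeresults
     | addinfo
     | eval earliest=relative_time(tonumber(info_min_time), "-1h"), latest=tonumber(info_max_time)
     | return earliest latest ])
OR (index=color name IN ("blue2","green2") msg="*Completed *")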
AppDynamics recently published a Security Advisory regarding a medium-severity vulnerability in the installer script of the PHP Agent (CVE-2023-20274). You can find details and guidance in our public Security Advisory, in the documentation under Product Announcements and Alerts.
Hi, I have two problems with a log line.

1) I have a log line that is occasionally inserted. It is a schedule, and I wish to extract the data from it. The entry has multiple values of the form eventTitle=..., but Splunk is only pulling the first occurrence from the log line and ignoring the rest. So I get:

eventTitle=BooRadley

in my fields, instead of:

eventTitle=BooRadley
eventTitle=REGGAE-2
eventTitle=CHRISTIAN MISSION

I have tried using regex and | kv pairdelim="=", kvdelim=",". I am unsure whether a line break would work, as the entries are referenced to SArts - this is a field extracted via regex and it changes.

2) The log line is about 9999 characters long including spaces, and not all of it is ingested - I think I need to create a limits.conf file?

Below is an abridged extract of the log line:

20231117154211 [18080-exec-9] INFO EventConversionService () - SArts: VUpdate(system=GRP1-VIPE, channelCode=UH, type=NextEvents, events=[Event(onAir=true, eventNumber=725538339, utcStartDateTime=2023-11-17T15:42:10.160Z, duration=00:00:05.000, eventTitle=BooRadley, contentType=Prog ), Event(onAir=false, eventNumber=725538313, utcStartDateTime=2023-11-17T15:42:15.160Z, duration=00:00:02.000, eventTitle= REGGAE-2, contentType=Bumper), Event(onAir=false, eventNumber=725538320, utcStartDateTime=2023-11-17T15:42:17.160Z, duration=00:01:30.000, eventTitle=CHRISITAN MISSION , contentType=Commercial), Event…

This is my code so far:

| rex "\-\s+(?<channel_name>.+)\:\sVUpdate"
| stats values(eventNumber) by channel_name channelCode utcStartDateTime eventTitle duration
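A sketch for both points: for (1), rex returns every occurrence when max_match=0; for (2), how much of a long event is kept is governed by TRUNCATE in props.conf (default 10000 bytes) rather than limits.conf, so raising it for this sourcetype on the parsing tier may be what's needed (the sourcetype name below is a placeholder).

| rex max_match=0 "eventTitle=\s*(?<eventTitle>[^,]+)"

# props.conf on the indexer or heavy forwarder that parses this sourcetype
[your_sourcetype]
TRUNCATE = 50000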
We are using this license: Splunk Enterprise Term License - No Enforcement 6.5. I am an administrator; when I try to create a new alert, I get "server error". Also, when I check the splunkd log, I see the following:

11-17-2023 11:03:02.381 +0000 ERROR AdminManager - Argument "app" is not supported by this handler.

I investigated all of this after seeing these warnings in scheduler.log:

11-17-2023 07:35:00.513 +0000 WARN SavedSplunker - Savedsearch scheduling cannot be inherited from another user's search. Schedule ignored for savedsearch_id="nobody;search;Proxy NGINX Errors Alert"
11-17-2023 07:35:00.513 +0000 WARN SavedSplunker - Savedsearch scheduling cannot be inherited from another user's search. Schedule ignored for savedsearch_id="nobody;search;Proxy issue"
11-17-2023 07:35:00.513 +0000 WARN SavedSplunker - Savedsearch scheduling cannot be inherited from another user's search. Schedule ignored for savedsearch_id="nobody;search;Failed linux logins Clone8"

I also checked the license manager; sometimes we exceed the quota, but as far as I have investigated, that does not remove the alerting functionality...
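The scheduler warnings appear to involve saved searches whose owner is "nobody" (app-level objects) while their schedule is tied to a user context; one way to confirm ownership and sharing for those searches, as a sketch run from the search head:

| rest /services/saved/searches splunk_server=local
| search title IN ("Proxy NGINX Errors Alert", "Proxy issue", "Failed linux logins Clone8")
| table title eai:acl.owner eai:acl.app eai:acl.sharing is_scheduled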
I am trying to output two rows of data, "read" and "write", each with the min, max, and avg of some values. Currently I can only display one row, and I don't know Splunk well enough to use the other set of spath variables to display the other row. This is my search and output:

index="collectd_test" plugin=disk type=disk_octets plugin_instance=dm-0
| spath output=values0 path=values{0}
| spath output=values1 path=values{1}
| spath output=dsnames0 path=dsnames{0}
| spath output=dsnames1 path=dsnames{1}
| stats min(values0) as min max(values0) as max avg(values0) as avg by dsnames0
| eval min=round(min, 2)
| eval max=round(max, 2)
| eval avg=round(avg, 2)
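A sketch of one way to get one row per dsname (read and write) instead of hard-coding values{0}/values{1}: pull both arrays as multivalue fields, pair them up with mvzip, and expand.

index="collectd_test" plugin=disk type=disk_octets plugin_instance=dm-0
| spath output=values path=values{}
| spath output=dsnames path=dsnames{}
| eval pair=mvzip(dsnames, values, "=")
| mvexpand pair
| eval dsname=mvindex(split(pair, "="), 0), value=tonumber(mvindex(split(pair, "="), 1))
| stats min(value) as min max(value) as max avg(value) as avg by dsname
| foreach min max avg [ eval <<FIELD>>=round(<<FIELD>>, 2) ]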
Sample data:

Device_ID : 1, A.txt
2021-07-06 23:30:34.2379| Started!
2021-07-06 23:30:34.6808|3333|-0.051|0.051|0.008|0.016

Device_ID : 1, E.txt
2021-07-13 18:28:26.7769|**
2021-07-13 18:28:27.1363|aa

Device_ID : 2, E.txt
2016-03-02 13:56:06.9283|**
2016-03-02 13:56:07.3333|ff

Device_ID : 2, A.txt
2020-03-02 13:42:30.0111| Started!
2020-03-02 13:42:30.0111|444|-0.051|0.051|0.008|0.016

Query:

index="xx" source="*A.txt"
| eval Device_ID=mvindex(split(source,"/"),5)
| reverse
| table Device_ID _raw
| rex field=_raw "(?<timestamp>[^|]+)\|(?<Probe_ID>[^|]+)"
| table Device_ID timestamp Probe_ID
| rex mode=sed field=timestamp "s/\\\\x00/ /g"
| table Device_ID timestamp Probe_ID
| eval time=strptime(timestamp,"%F %T.%4N")
| streamstats global=f max(time) as latest_time by Device_ID
| where time >= latest_time
| eval _time=strptime(timestamp,"%Y-%m-%d %H:%M:%S.%4N")
| table Device_ID _time Probe_ID
| join type=left Device_ID
    [ search index="xx" source="*E.txt"
    | eval Device_ID=mvindex(split(source,"/"),5)
    | reverse
    | rex field=_raw "(?<timestamp>[^|]+)"
    | stats first(timestamp) as earliesttime last(timestamp) as latesttime by Device_ID
    | table Device_ID earliesttime latesttime ]
| where _time >= strptime(earliesttime, "%Y-%m-%d %H:%M:%S.%4N") AND _time <= strptime(latesttime, "%Y-%m-%d %H:%M:%S.%4N")
| search Device_ID="1"

I am filtering events in A.txt based on the earliest timestamp in E.txt. It works for Device_ID 1 but not for Device_ID 2. Both logs have the same format. The query does not generate the earliest and latest timestamps for Device_ID 2, although if I run the subsearch alone, it does generate them.
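In case it helps, a sketch that avoids the join (and its subsearch result limits) by computing each device's E.txt window with eventstats and filtering the A.txt rows against it; it mirrors the field logic above, omits the streamstats "latest run" step for brevity, and is untested against the real file paths:

index="xx" (source="*A.txt" OR source="*E.txt")
| eval Device_ID=mvindex(split(source,"/"),5)
| eval file=if(like(source, "%E.txt"), "E", "A")
| rex field=_raw "(?<timestamp>[^|]+)\|(?<Probe_ID>[^|]*)"
| rex mode=sed field=timestamp "s/\\\\x00/ /g"
| eval time=strptime(timestamp, "%Y-%m-%d %H:%M:%S.%4N")
| eventstats min(eval(if(file="E", time, null()))) as e_earliest max(eval(if(file="E", time, null()))) as e_latest by Device_ID
| where file="A" AND time >= e_earliest AND time <= e_latest
| eval _time=time
| table Device_ID _time Probe_ID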
TC Execution Summary for Last Quarter

No. of job runs | AUS | JER | IND | ASI
August          | 150 | 121 | 110 | 200
Sept            | 200 | 140 | 150 | 220
Oct             | 100 | 160 | 130 | 420

I want to write a query for the above table.