All Topics


I am configuring the Cisco AMP for Endpoints input on our IDM instance. When creating the input, I am not able to specify the desired index for the data to go into; my only options are main, summary, and history. How do I specify my index?
Hey Splunksters, my work environment is switching from Windows (a large distributed environment) to Linux pretty soon. I'd like to get familiar with architecting in Linux, so I have a couple of questions. Can I simply spin up 3-5 AWS Linux VMs and use the free version of Splunk to get familiar with the process of creating a distributed environment (assigning a search head, one indexer, maybe a couple of forwarders, and a deployment server)? Or is free Splunk Enterprise only good for one download on one machine? Thanks!
I'm at my wits' end here; everything seems to indicate that what I'm doing should work, yet it's not. I have Azure firewall logs feeding in through a storage account using the Microsoft Cloud Services app. These come in as standard JSON, which Splunk extracts fine. There is a nested field in the JSON, "properties.msg", that has the actual firewall log message, including source/destination information, IPs/ports, whether the traffic was allowed/denied, and which firewall rule was referenced. For reference, this thread discusses a nearly identical case/problem: https://community.splunk.com/t5/Splunk-Search/Azure-Firewall-Log-Field-Extraction-Help/m-p/411148

The added wrinkle I have is that I am trying to get the extracted fields to work with CIM data models, not just get the extractions as results from a search. This honestly seemed easy enough, but for some reason none of my field extractions are working. Here are some facts/things I have tried:
- This is in Splunk Cloud
- I created a regex to extract all the fields from properties.msg into named capture groups
- The regex shows as correct in Regex101
- The regex extracts all the fields when used with the 'rex' command in search
- Using the regex inside the Field Extractor tool and checking with the preview function shows the fields extracted
- I've saved the extraction as shared Globally, Private, and App only (I even tried apps other than search)
- I've tried saving it as an inline extraction, and as a transform applied to both _raw and properties.msg as the SOURCE_KEY
- I'm not seeing any errors or warnings when making any of these changes that would suggest something was wrong

None of this seems to work; none of the fields are extracted. I tried a field alias from 'properties.msg' to 'msg', and that worked, so the field itself is reachable (but it didn't help me, because I still can't extract the data from within that message).
I honestly don't get how I can see the regex working in the Field Extractor, hit 'Save', see it saved in the configurations, yet get no extracted fields.

EDIT: Sample _raw logs (more in the updated link posted above):

{ "category": "AzureFirewallApplicationRule", "time": "2021-05-04T15:41:59.8967610Z", "resourceId": "/SUBSCRIPTIONS/REDACTED/RESOURCEGROUPS/REDACTED/PROVIDERS/MICROSOFT.NETWORK/AZUREFIREWALLS/SOMEFW", "operationName": "AzureFirewallApplicationRuleLog", "properties": {"msg":"HTTPS request from 192.168.0.1:8888 to subdomain.x99.blob.storage.azure.net:443. Action: Allow. Rule Collection: AllowOutbound. Rule: AllowOutbound-AA-AA-A"}}
{ "category": "AzureFirewallApplicationRule", "time": "2021-05-04T15:41:58.6369780Z", "resourceId": "/SUBSCRIPTIONS/REDACTED/RESOURCEGROUPS/REDACTED/PROVIDERS/MICROSOFT.NETWORK/AZUREFIREWALLS/SOMEFW", "operationName": "AzureFirewallApplicationRuleLog", "properties": {"msg":"HTTPS request from 192.168.0.1:8888 to subdomain.x99.blob.storage.azure.net:443. Action: Allow. Rule Collection: AllowOutbound. Rule: AllowOutbound-AA-AA-A"}}
{ "category": "AzureFirewallNetworkRule", "time": "2021-05-07T15:05:59.8277330Z", "resourceId": "/SUBSCRIPTIONS/REDACTED/RESOURCEGROUPS/REDACTED/PROVIDERS/MICROSOFT.NETWORK/AZUREFIREWALLS/SOMEFW", "operationName": "AzureFirewallNetworkRuleLog", "properties": {"msg":"TCP request from 192.168.0.1:8888 to 8.8.8.8:8888. Action: Deny. "}}

Regex:
\"(?<protocol>\w+)\s[rR]equest\D+(?<src>[^\:]+)\:(?<src_port>\d+) to (?<dest>[^\:]+)\:((?<dest_port>\d+))?\.\sAction\: (?<action>\w+)\.(?: Rule Collection\: (?<cat>\w+)\. Rule\: (?<rule>[^\"]+))?
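Before digging further into the Splunk configuration, it can help to confirm the regex itself outside Splunk. A minimal sketch in Python (note that Python's re module writes named groups as (?P<name>...) where PCRE/Splunk accept (?<name>...); the message text is taken from the sample events above):

```python
import re

# The posted regex, converted to Python's named-group syntax.
pattern = re.compile(
    r'(?P<protocol>\w+)\s[rR]equest\D+(?P<src>[^:]+):(?P<src_port>\d+)'
    r' to (?P<dest>[^:]+):((?P<dest_port>\d+))?\.\sAction: (?P<action>\w+)\.'
    r'(?: Rule Collection: (?P<cat>\w+)\. Rule: (?P<rule>[^"]+))?'
)

# The value of properties.msg from the first sample event (without the
# surrounding JSON quotes).
msg = ("HTTPS request from 192.168.0.1:8888 to "
       "subdomain.x99.blob.storage.azure.net:443. Action: Allow. "
       "Rule Collection: AllowOutbound. Rule: AllowOutbound-AA-AA-A")

m = pattern.search(msg)
print(m.groupdict())
```

Since the regex does match here (and in Regex101 and with rex), the pattern itself can probably be ruled out; the problem is more likely where the saved extraction runs, i.e. the knowledge object's app scope and permissions at search time.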
We are using the Phantom Add-on to forward events from our ES SH to our Phantom instance, but after upgrading to v4.035 we have started getting duplicates of all events in Phantom. The configuration was created in the Phantom Add-on UI and is set to run every 10 minutes.
Hello, we were using ITSI 4.3.1. As it had some issues, we decided to uninstall it and do a fresh install of 4.7.2. We took a backup of the current configuration using "create backup job" in the ITSI GUI, and I verified that we had backup jobs stored in the /var/itsi/backups directory on the respective search head server. However, after installing 4.7.2 I can't see any of those backup jobs in that directory; it got overwritten by the 4.7.2 backups, as below:

[root@server backups]# pwd
/opt/splunk/var/itsi/backups
[root@server backups]# ls
ItsiDefaultScheduledBackup-1620340301.zip  ItsiDefaultScheduledBackup-1620343842.zip

Is there any way to retrieve the full & partial backups that we took on the earlier version (4.3.2)? We had all our services and KPIs there, and we don't have any other backups of the same. Thanks in advance for your support.
None of the solutions on here work. I tried running as an admin but still get the same error. I could install it on a different laptop without any problems. What can I compare between these 2 machines? Also, on the machine where it is not installing, in Program Files\Splunk\etc\system\local there are 2 .conf files (authentication and user-seed). Any suggestions?
For the sake of DR or backup, how do I back up or recover, say, my Splunk Deployment Server or Cluster Master? If I may also ask, what about SHs, indexers, and so on? Thank you in advance.
I am new to Splunk. Please help me with the content below. I need to check the first and last events of a particular transaction, and an alert should be triggered if the sequence is not followed or any process stops in the middle. How can I do that? Can anyone please help me with this? Thanks in advance.
Running into a strange issue: after upgrading my Heavy Forwarder to 8.1.3, and subsequently the Microsoft Cloud Services app to a supported version (4.1.2, and I have tried 4.1.1), it is no longer collecting logs, giving a "subscription id cannot be found" error. I've re-set up the account, and the authentication works fine in the configuration tab (I've tested by changing a number or letter in the tenant ID, and it threw an error).

I've tried using both the Object ID and the Application ID, and it gives these errors:

mscs_api_error.APIError: "status=404, error_code=SubscriptionNotFound, error_msg=The subscription '7f3c89aa-1767-429f-ade2-8f38d67426b3' could not be found."
mscs_api_error.APIError: "status=404, error_code=SubscriptionNotFound, error_msg=The subscription 'cbe4be6e-142f-4c52-acaf-92671bd8c6e4' could not be found."

Any ideas?
I have configured collectd but am not getting the correct data in Splunk. Some of the metadata, like system architecture, version, and OS details, is reported, but not the metrics data.

# systemctl status collectd
● collectd.service - Collectd statistics daemon
   Loaded: loaded (/usr/lib/systemd/system/collectd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-05-07 09:26:56 EDT; 1min 6s ago
     Docs: man:collectd(1)
           man:collectd.conf(5)
 Main PID: 10493 (collectd)
    Tasks: 11
   Memory: 1.3M
   CGroup: /system.slice/collectd.service
           └─10493 /usr/sbin/collectd

May 07 09:27:56 abc109 collectd[10493]: Available write targets: [none]
May 07 09:27:56 abc109 collectd[10493]: Available write targets: [none]
May 07 09:27:56 abc109 collectd[10493]: Available write targets: [none]
May 07 09:27:56 abc109 collectd[10493]: Available write targets: [none]
May 07 09:27:56 abc109 collectd[10493]: Available write targets: [none]
May 07 09:27:56 abc109 collectd[10493]: Available write targets: [none]
May 07 09:27:56 abc109 collectd[10493]: Available write targets: [none]
May 07 09:27:56 abc109 collectd[10493]: Available write targets: [none]
May 07 09:27:56 abc109 collectd[10493]: Available write targets: [none]
May 07 09:27:56 abc109 collectd[10493]: Available write targets: [none]
Hi Team,

I am trying to use an iframe to load a Tableau dashboard. Up until Splunk 7.x it was working fine, but after upgrading to Splunk 8.0.6 it no longer loads. I found answers in the forum and made the changes below, but it still did not work. Is there any other change I need to make?

web.conf:
x_frame_options_sameorigin = false
replyHeader.Content-Security-Policy = frame-ancestors-self

server.conf:
x_frame_options_sameorigin = false
Hi All,

- How do I set up a license restriction for BRUM? E.g., I need to set up a restriction so that only 50,000 page views can be used; once the 50,000 are used, no more page views should be allowed. Also, after the 50,000 units are consumed, can we allow a few of the most critical transactions to still use the license?
- I also need to understand the best practices that should be followed when setting up BRUM for optimal use of licensing, as well as suggested use cases for setting it up (yes, requirements differ from organization to organization).

Regards
Hi everyone, I have a question about the following sample events. I am trying to group by job and provide two things: the current status of the job (i.e. starting/running/success/failure) and the total duration for the job, i.e. the duration between the starting, running, and success phases of a job that might run multiple times in a day.

JOBNAME  STATUS   TOTALDURATION
JOB1     RUNNING  00:30:10 (ran for the day)
JOB2     SUCCESS  01:20:10 (ran for the day)

SAMPLE EVENTS:
[04/07/2021 22:16:01]. EVENT: CHANGESTATUS STATUS: SUCCESS JOB: JOB1
[04/07/2021 22:15:01]. EVENT: CHANGESTATUS STATUS: RUNNING JOB: JOB1
[04/07/2021 22:15:00]. EVENT: CHANGESTATUS STATUS: STARTING JOB: JOB1
[04/07/2021 22:11:01]. EVENT: CHANGESTATUS STATUS: SUCCESS JOB: JOB1
[04/07/2021 22:10:08]. EVENT: CHANGESTATUS STATUS: SUCCESS JOB: JOB2
[04/07/2021 22:10:01]. EVENT: CHANGESTATUS STATUS: RUNNING JOB: JOB2
[04/07/2021 22:10:01]. EVENT: CHANGESTATUS STATUS: RUNNING JOB: JOB1
[04/07/2021 22:10:00]. EVENT: CHANGESTATUS STATUS: STARTING JOB: JOB1
[04/07/2021 22:10:00]. EVENT: CHANGESTATUS STATUS: STARTING JOB: JOB2

Query:
index=.........
| rex "query captures status, jobname, and timestamp in format HH:MM:SS"
| transaction jobname startswith=(status="STARTING") endswith=(status="SUCCESS")
| stats first(status) as jobcurrentstatus, sum(duration) as totaldur by jobname
| eval totalduration(HH:MM:SS)=tostring(totaldur,"duration")

My current query works for TOTALDURATION but doesn't give an accurate result for the current job status. Is there a way I can get the correct current status?
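One likely cause of the inaccurate status: transaction with endswith=(status="SUCCESS") only keeps closed transactions, so first(status) reflects the last completed run rather than a job that is currently mid-run. Tracking the latest event per job separately from the STARTING-to-SUCCESS durations gives both answers; a sketch of that logic in Python over the sample events above (illustrative only, not SPL — in SPL the latest status could come from a separate stats latest(status) pass joined back in):

```python
from datetime import datetime

# (timestamp, status, job) tuples, newest first as in the question.
events = [
    ("04/07/2021 22:16:01", "SUCCESS",  "JOB1"),
    ("04/07/2021 22:15:01", "RUNNING",  "JOB1"),
    ("04/07/2021 22:15:00", "STARTING", "JOB1"),
    ("04/07/2021 22:11:01", "SUCCESS",  "JOB1"),
    ("04/07/2021 22:10:08", "SUCCESS",  "JOB2"),
    ("04/07/2021 22:10:01", "RUNNING",  "JOB2"),
    ("04/07/2021 22:10:01", "RUNNING",  "JOB1"),
    ("04/07/2021 22:10:00", "STARTING", "JOB1"),
    ("04/07/2021 22:10:00", "STARTING", "JOB2"),
]

# Parse and sort oldest-first so we can walk the timeline.
parsed = sorted(
    (datetime.strptime(ts, "%m/%d/%Y %H:%M:%S"), status, job)
    for ts, status, job in events
)

current = {}  # job -> status of its newest event so far
totals = {}   # job -> summed STARTING->SUCCESS duration, in seconds
starts = {}   # job -> timestamp of the open STARTING event
for ts, status, job in parsed:
    current[job] = status
    if status == "STARTING":
        starts[job] = ts
    elif status == "SUCCESS" and job in starts:
        totals[job] = totals.get(job, 0) + (ts - starts.pop(job)).total_seconds()

print(current)  # both jobs end on SUCCESS in this sample
print(totals)   # JOB1: two 61 s runs; JOB2: one 8 s run
```

A job that had a STARTING event with no matching SUCCESS would show as RUNNING in current while contributing nothing extra to totals, which is exactly the case the transaction-based query misses.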
We have 150+ accounts in total, and we currently want to create an asset DB to integrate with threat intel. Please suggest multiple approaches you have come across to build one in Splunk for AWS assets.
Hello everyone. There is a stanza, [admon://], in the Splunk_TA_Windows add-on for monitoring AD. I activated it and rolled it out to all controllers, and various AD change events began to flow in. But I do not see events when a user is added to or removed from a group. Earlier there was another add-on, https://docs.splunk.com/Documentation/DCADAddon/1.0.1/DCADAddon/Configuretheadd-ons, but it has not been supported since 2019 and all its functionality was transferred to the Splunk Add-on for Windows. How do I set up the Splunk_TA_Windows stanza to monitor group membership changes in AD?
I have a search result where every 3 consecutive lines form a block that I want to join into one row, like:

fld1 fld2 fld3 fld4
A          B
           B    C
     D          C
E          F
           F    G
     H          G

As a result of the join I want to have:

fld1 fld2 fld3 fld4
A    D    B    C
E    H    F    G

I have tried the following search, which works partially:

| makeresults | eval fld1="A", fld3="B"
| append [makeresults | eval fld3="B", fld4="C" ]
| append [makeresults | eval fld2="D", fld4="C" ]
| append [makeresults | eval fld1="E", fld3="F" ]
| append [makeresults | eval fld3="F", fld4="G" ]
| append [makeresults | eval fld2="H", fld4="G" ]
| table fld1 fld2 fld3 fld4
| outputcsv fldRows
| fields - *
| append [ | inputcsv fldRows | selfjoin fld3 ]
| append [ | inputcsv fldRows | selfjoin fld4 ]
| selfjoin fld4

There are two problems:
- When running for the first time there is no result. When modifying a field, the first value of this field is returned.
- There seems to be a problem in that, on the second and following runs, outputcsv does not update fldRows.

I am also curious whether there is a simpler approach for getting the desired results. Thanks for a response.
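If the three lines of each block are guaranteed to be consecutive, the join amounts to merging every three partial rows into one record. A sketch of that logic in Python, using the field names and values from the example above (illustrative only, not a Splunk answer):

```python
# Each dict is one partial row from the search output, in order.
rows = [
    {"fld1": "A", "fld3": "B"},
    {"fld3": "B", "fld4": "C"},
    {"fld2": "D", "fld4": "C"},
    {"fld1": "E", "fld3": "F"},
    {"fld3": "F", "fld4": "G"},
    {"fld2": "H", "fld4": "G"},
]

merged = []
for i in range(0, len(rows), 3):          # walk the rows in blocks of 3
    combined = {}
    for part in rows[i:i + 3]:
        combined.update(part)             # overlapping keys agree (fld3=B, fld4=C)
    merged.append(combined)

for row in merged:
    print(row)
```

The equivalent idea in SPL would be to number the rows, assign a block id of floor(rownum/3), and aggregate the fields per block id, which avoids the outputcsv/selfjoin round-trip entirely.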
Machine Learning Toolkit - Density Function Hello, I'm trying to use the machine learning tool to create a model based on a time frame, and then analyze the current time window to see if it stands out from the average behavior for this hour of the day on this day of the week. When trying to build the model with a command like this:
Machine Learning Toolkit - Density Function Hello, I'm trying to use the machine learning tool in order to create a model based on a time frame and then analyze the current time window in order to see if it stands out from an average behavior fort his hour of the day on this day of the week. When trying to build the model with a command like this:   index=onlineservices "USERACTION: Successful login" earliest=-33d@d latest=-1d@d | bin _time span=15m | eval date_minutebin=strftime(_time, "%M") | eval date_hour=strftime(_time, "%H") | eval date_wday=strftime(_time, "%A") | stats count by _time date_minutebin date_hour date_wday  | fit DensityFunction count by "date_minutebin,date_hour,date_wday" into mydensitymodel threshold=0.5 dist=norm sample=True   I get the following error: Error in 'fit' command: Error while fitting "DensityFunction" model: 'module' object has no attribute 'wasserstein_distance'   I have also tryed to change the metrics to kolmogorov_smirnov, but I get another error in this case: Error in 'fit' command: Error while fitting "DensityFunction" model: 'functools.partial' object has no attribute '__module__'   Can someone help me with this? Why the default metrics does not work in my case?   
Below you can find an example of the data I get from the first search, before piping it into the fit function:

_time                 date_minutebin  date_hour  date_wday  count
2021-04-29 00:00:00   00              00         Thursday   4
2021-04-29 00:15:00   15              00         Thursday   3
2021-04-29 00:30:00   30              00         Thursday   2
2021-04-29 00:45:00   45              00         Thursday   3
2021-04-29 01:00:00   00              01         Thursday   2
2021-04-29 01:15:00   15              01         Thursday   1
2021-04-29 01:45:00   45              01         Thursday   2
2021-04-29 02:00:00   00              02         Thursday   3
2021-04-29 02:15:00   15              02         Thursday   1
2021-04-29 02:30:00   30              02         Thursday   2
2021-04-29 02:45:00   45              02         Thursday   1
2021-04-29 03:00:00   00              03         Thursday   1
2021-04-29 03:15:00   15              03         Thursday   2
2021-04-29 03:30:00   30              03         Thursday   1
2021-04-29 03:45:00   45              03         Thursday   2
2021-04-29 04:00:00   00              04         Thursday   1
2021-04-29 04:15:00   15              04         Thursday   2
2021-04-29 04:30:00   30              04         Thursday   1
I'm trying to parse the sample below using Delimiters; could anyone help with the extraction? Delimiters don't work for this. Can someone help with a regex?

[2021-05-07T20:54:50.6222+10:00] [BDF] [ERROR:32] [BD99999] [security2] [client_id: 10.10.18.236] [host_id: google.com ] [host_addr: 10.10.05.11] [pid: 5397] [tid: 139783720359680] [user: apaapp] [ecid: 005kRh1ly^x8dpK_yTk3yW0001K80002jb] [rid: 0] [VirtualHost: google:4445] [client 0.10.18.236] ModSecurity: Warning. Pattern match "^[\\\\d.:]+$" at REQUEST_HEADERS:Host. [file "/apps/vbgrt/bdf/Google/Middleware/user_projects/domains/bdf_domain/config/fmwconfig/components/BDF/instances/bcp/crs-rules/REQUEST-920-PROTOCOL-ENFORCEMENT.conf"] [line "735"] [id "920350"] [msg "Host header is a numeric IP address"] [data"10.10.05.11:4445"] [severity "WARNING"] [ver "OWASP_PQR/3.3.0"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-protocol"] [tag "paranoia-level/1"] [tag "OWASP_PQR"] [tag "capec/1000/210/272"] [tag "PCI/6.5.10"] [hostname "google"] [uri "/"] [unique_id "HTjues090uwmX0Cz1kLVwAAAIw"]
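Delimiters struggle here because the bracketed segments come in three shapes: [key: value], [key "value"], and bare tokens like [BDF]. Iterating over the [...] chunks with a regex handles all three. A Python sketch over a shortened version of the sample line (the same two patterns could be adapted into a rex max_match=0 or a transforms-based extraction):

```python
import re

# A shortened version of the posted event, keeping one chunk of each shape.
line = ('[2021-05-07T20:54:50.6222+10:00] [BDF] [ERROR:32] [security2] '
        '[client_id: 10.10.18.236] [host_id: google.com ] [host_addr: 10.10.05.11] '
        '[pid: 5397] [tid: 139783720359680] [user: apaapp] '
        '[msg "Host header is a numeric IP address"] [severity "WARNING"]')

# Pull out each bracketed chunk, then classify it.
chunks = re.findall(r'\[([^\]]*)\]', line)

fields = {}
for chunk in chunks:
    m = re.match(r'(\w+):\s*(.*?)\s*$', chunk)   # [key: value]
    if not m:
        m = re.match(r'(\w+)\s+"(.*)"$', chunk)  # [key "value"]
    if m:
        fields[m.group(1)] = m.group(2)          # bare tokens are skipped

print(fields)
```

Repeated keys like [tag "..."] would overwrite each other in this dict-based sketch; in Splunk the equivalent extraction would produce a multivalue field instead.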
I have installed SUF 7.3.4 on a UNIX (Solaris 10) server, and when I run "splunk list guid" or "splunk list monitor" I get a "Splunk username" prompt. I have a user "splunkma" configured that I use to stop/start the splunkd process. Please advise. Thanks. RB
Hello! My data is in this form:

_time (dd/mm/yyyy), NbRisk, SubProject, GlobalProject
02/05/2021, 10, SubProject1, Project 1
01/05/2021, 4,  SubProject2, Project 1
01/05/2021, 5,  SubProject3, Project 1
10/04/2021, 80, SubProject1, Project 1
09/04/2021, 32, SubProject2, Project 1
09/04/2021, 12, SubProject3, Project 1
03/05/2021, 11, SubProject1, Project 2
01/05/2021, 40, SubProject2, Project 2
15/04/2021, 60, SubProject3, Project 2
09/04/2021, 2,  SubProject3, Project 2
08/04/2021, 4,  SubProject2, Project 2
05/04/2021, 3,  SubProject1, Project 2

I would like to take the last "NbRisk" for each "SubProject" and sum it for every day by "GlobalProject" with timechart, to see the trend. For example:

07/05/2021, Project 1, 19
07/05/2021, Project 2, 111
06/05/2021, Project 1, 19
06/05/2021, Project 2, 111
...
01/05/2021, Project 1, 89
01/05/2021, Project 2, 103
...
10/04/2021, Project 1, 124
10/04/2021, Project 2, 9

I have tried a lot of things like streamstats/addtotals without success. Thanks for your help!
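As a sanity check of the expected numbers outside Splunk: for each day, take each SubProject's most recent NbRisk on or before that day, then sum per GlobalProject. A Python sketch using the sample data above (illustrative only; in SPL this is roughly a latest-value-per-SubProject carried forward, then summed per GlobalProject per day):

```python
from datetime import date

# (date, NbRisk, SubProject, GlobalProject) rows from the question.
records = [
    (date(2021, 5, 2), 10, "SubProject1", "Project 1"),
    (date(2021, 5, 1),  4, "SubProject2", "Project 1"),
    (date(2021, 5, 1),  5, "SubProject3", "Project 1"),
    (date(2021, 4, 10), 80, "SubProject1", "Project 1"),
    (date(2021, 4, 9),  32, "SubProject2", "Project 1"),
    (date(2021, 4, 9),  12, "SubProject3", "Project 1"),
    (date(2021, 5, 3),  11, "SubProject1", "Project 2"),
    (date(2021, 5, 1),  40, "SubProject2", "Project 2"),
    (date(2021, 4, 15), 60, "SubProject3", "Project 2"),
    (date(2021, 4, 9),   2, "SubProject3", "Project 2"),
    (date(2021, 4, 8),   4, "SubProject2", "Project 2"),
    (date(2021, 4, 5),   3, "SubProject1", "Project 2"),
]

def total_on(day):
    """Sum, per GlobalProject, the most recent NbRisk of each SubProject on or before `day`."""
    latest = {}  # (project, subproject) -> (date, nbrisk)
    for d, nb, sub, proj in records:
        if d <= day:
            key = (proj, sub)
            if key not in latest or d > latest[key][0]:
                latest[key] = (d, nb)
    totals = {}
    for (proj, _sub), (_d, nb) in latest.items():
        totals[proj] = totals.get(proj, 0) + nb
    return totals

print(total_on(date(2021, 5, 7)))   # matches the 07/05/2021 lines above
print(total_on(date(2021, 4, 10)))  # matches the 10/04/2021 lines above
```

This reproduces the expected 19/111, 89/103, and 124/9 totals, which confirms the "last value per SubProject, summed per project per day" reading of the requirement.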