All Topics

In our source code we provide a link; when a user clicks it, it takes them to the URL we have given. My question is: if that website asks for credentials, can we bypass the login prompt by storing the credentials in the source code? If yes, please give me suggestions.
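One possible approach, if the target site uses HTTP Basic authentication, is embedding the credentials in the URL's userinfo part. Note that this exposes the credentials to anyone who views the page source, and many modern browsers strip or block such URLs, so this is a sketch of what is technically possible rather than a recommendation (the host and credentials are placeholders):

<!-- Works only for HTTP Basic auth; credentials are visible in the page source. -->
<a href="https://myuser:mypassword@reports.example.com/">Open reports</a>

If the site uses a login form rather than Basic auth, there is no safe way to pre-fill credentials from a static link; a server-side proxy or SSO integration would be needed.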
Is it possible to add Puerto Rico to the | geom geo_us_states featureIdField=STATE lookup? I have not been able to find any apps or even anyone asking this question before.
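As far as I know, geo_us_states ships as a fixed geospatial lookup without territories, but you can define your own geo lookup from a KMZ/KML that includes Puerto Rico. A minimal sketch, where the lookup name and the KMZ file (which you would have to source or build yourself) are placeholders:

# transforms.conf
[geo_us_states_territories]
external_type = geo
filename = us_states_and_territories.kmz

# then in the search, in place of the shipped lookup:
| geom geo_us_states_territories featureIdField=STATE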
I'm trying to use the built-in GEOSTATS map to populate a simple map showing the number of IP addresses that hit my firewall from a specific country over a period of time. The problem is, I cannot get anything related to GEOSTATS to work. Hopefully I can lay this out in a simple manner.

Datamodel = test

Extracted fields: Client_IP (the field within the log from which the originating IP address is extracted); test_IP (a field alias pointing to that field, set as IPv4, used as the "IP" field within the GEO IP settings).

What works:
1. Datamodel "test": acceleration is on, status 100% complete, and tstats commands run against this datamodel produce the expected results.
2. If I go into datamodel "test" under the GEO IP settings and select "Preview", it populates with Lat, Long, and Country information.
3. | datamodel test search | table Client_IP, test_IP, test_lat, test_lon, test_Country
   This query produces lat, long, and country results.
4. | tstats count AS Unique_IP FROM datamodel="test" BY test.test_IP test.test_Country
   This query produces exactly what I would expect to see: the "test_IP" field with IP addresses, the "Unique_IP" field with the count of records per IP address, and "test_Country" showing the country each IP address originates from.

The problem: once I add a pipe "|", things stop working. Examples:
1. | tstats count AS Unique_IP FROM datamodel="test" BY test.test_IP | table test.test_IP test.test_Country Unique_IP
   This only shows the "Unique_IP" field with its results and the IP addresses in the "test.test_IP" field.
2. | tstats count AS Unique_IP FROM datamodel="test" BY test.test_IP | geostats latfield=test.lat longfield=lon globallimit=0
   This produces no "Statistics" and no "Visualization".

I greatly appreciate your time and thank you for your help!
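For what it's worth, a hedged sketch of the usual fix: after | tstats, only the aggregates and the BY fields (with their "test." prefix intact) exist in the pipeline, so test.test_Country vanishes from example 1 because it was dropped from the BY clause, and geostats in example 2 references lat/lon fields that were never carried through. Assuming test_lat and test_lon are fields of the datamodel, as the preview suggests:

| tstats count AS Unique_IP FROM datamodel="test" BY test.test_IP test.test_lat test.test_lon
| rename test.* AS *
| geostats latfield=test_lat longfield=test_lon globallimit=0 sum(Unique_IP)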
In Splunk I have fully qualified sources and destinations. Example: src=host1.mydomain.com. When I table it out, I just want it to show host1 without .mydomain.com. How do I do this?
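A minimal sketch, assuming the field is named src as in the example: split on the dots and keep the first segment.

... your base search ...
| eval src=mvindex(split(src, "."), 0)
| table src

An equivalent form is | rex field=src mode=sed "s/\..*$//", which rewrites the field in place.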
Is there a way, if I search for a username (e.g. first_initial.lastname) under a specific index, to get a table of all the fields within that index that contain the value for that specific username?
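One hedged way to do this is with foreach, which tests every extracted field for the value and collects the names of the fields that match; the index and username below are hypothetical placeholders:

index=myindex "j.smith"
| foreach * [ eval fields_with_user=if('<<FIELD>>'=="j.smith", mvappend(fields_with_user, "<<FIELD>>"), fields_with_user) ]
| stats values(fields_with_user) as fields_with_user

This only inspects fields that are extracted at search time, and the exact == match may need to be loosened (e.g. with match()) if the username is embedded in longer values.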
Hello Splunkers, I need help with changing the sourcetype of some logs. There is a UF installed on a Windows server. I would like to collect Windows logs, so the Splunk Add-on for Windows is installed on the UF. There is no config change in the add-on itself; I only made a separate app for collecting the Application log, with a simple inputs.conf file:

# Windows platform specific input processor.
[WinEventLog://Application]
index = windows_app
disabled = 0
renderXml = false

The log goes from the UF to a HF. On the HF I would like to change the sourcetype for part of the Windows log, namely the Citrix FAS log. So I made an app on the HF with this content:

props.conf
[WinEventLog]
TRANSFORMS-win_citrix_fas_sourcetype = citrix_fas_sourcetype

transforms.conf
[citrix_fas_sourcetype]
REGEX = SourceName=Citrix\.Authentication\.FederatedAuthenticationService
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::citrix_fas

The Splunk Add-on for Windows is installed on the HF as well. The problem is that the log is indexed with sourcetype=WinEventLog, so my app on the HF is manifestly ineffective. Of course, my app has global permissions and is enabled, and the REGEX in transforms.conf should be OK. Could someone help me point out what is wrong? AFAIK it should be working... Thanks in advance for your help. Regards, Lukas
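Not a definitive diagnosis, but one thing worth ruling out is props precedence on the HF: the Splunk Add-on for Windows also defines a [WinEventLog] props stanza, and two apps competing for the same stanza can make one TRANSFORMS class lose. A hedged sketch that sidesteps this by scoping on the source instead of the sourcetype (the source name follows from the WinEventLog://Application input above):

# props.conf on the HF
[source::WinEventLog:Application]
TRANSFORMS-win_citrix_fas_sourcetype = citrix_fas_sourcetype

It is also worth confirming the events really pass through this HF's parsing queue; if any other heavy forwarder or indexer touches them first, the data arrives already cooked and index-time transforms on this HF will not run.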
Hi, we were doing some testing in a lab environment with a SmartStore (S2) indexer cluster. Apparently a lab user shut down/rebooted the wrong nodes, so the cluster master and one or more of the 3 cluster members were shut down by accident. After starting the indexer cluster boxes and restarting Splunk, we have noticed discrepancies in the data we are searching; e.g. we are not seeing all the data we used to see, and there appear to be gaps. We believe the cache manager was corrupted, as the data seems to be complete in the S2 remote store (S3 bucket) but not when searching. We are looking for the proper way to reconnect to the existing SmartStore (S3 bucket path) and reset the cache manager so it re-registers all the contents. Unfortunately I am not finding the documentation, which I thought I read months ago... Related to this, one of our use cases was to test connecting a standalone indexer to the same S2 store to retrieve previous years of logs for forensic reasons, without impacting the primary S2 indexer cluster. Thank you in advance.
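On the standalone-indexer idea, the general shape is an indexes.conf that points at the same remote volume. All names, paths, and the endpoint below are placeholders, and as far as I know Splunk only supports one cluster writing to a given remote store, so a second consumer should be treated as read-only and experimental:

# indexes.conf on the standalone indexer (placeholder values)
[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket/indexes
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

[main]
remotePath = volume:remote_store/main
homePath = $SPLUNK_DB/main/db
coldPath = $SPLUNK_DB/main/colddb
thawedPath = $SPLUNK_DB/main/thaweddb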
I am receiving an error of "The expression is malformed. Expected IN." any time we search using the Web data model. When I remove this eval expression, if(act="File quarantined","blocked",action), the search works fine, so I am assuming that this is the problem child. Does anyone see anything inherently wrong with this expression?
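Not a definitive diagnosis, but two things I would try: retype the expression by hand to rule out smart quotes picked up from copy-paste, and make the comparison operator explicit, since eval is stricter in some contexts:

| eval action=if(act=="File quarantined", "blocked", action)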
Hi, we are setting up a very small network:
- 25 desktops
- 15 servers (Windows and Linux)
- 1 NAS
- 4 network devices
The network will grow by 10% each year. We are looking at Splunk Enterprise on premise, but we are hesitating between the pricing options (Infrastructure-based Pricing, Predictive Pricing Program, Ingest Pricing, Rapid Adoption Packages). Is it possible to get a recommendation on the pricing option that will best satisfy our needs? We know that free Splunk would do fine, but we want support. Thank you!!
Hello Team, I am trying to pass the value of a time token into a dbxquery to update the current time, but it is not working. Without the token it works fine.

<form hideSplunkBar="true" hideEdit="false" script="submit_date.js">
  <label>Cloud Studio - Migration Scheduler testing</label>
  <init>
    <set token="tokFromDate"></set>
    <set token="tokToDate"></set>
    <set token="defaut_time">$result.today$</set>
  </init>
  <search>
    <query>| makeresults | eval today=strftime(_time,"%Y-%m-%dT%H:%M") | fields - _time</query>
    <done>
      <set token="defaut_time">$result.today$</set>
    </done>
  </search>
  <search>
    <query>|dbxquery connection="CloudAssessment2" query="
      UPDATE [Cloudstudio2].[dbo].integrated_assessment_mgl SET
        [Cloudstudio2].[dbo].integrated_assessment_mgl.pl_start_date='$tokFromDate$',
        [Cloudstudio2].[dbo].integrated_assessment_mgl.pl_end_date='$tokToDate$',
        [Cloudstudio2].[dbo].integrated_assessment_mgl.pl_current_date='$result.today$'
      WHERE [Cloudstudio2].[dbo].integrated_assessment_mgl.app_group=$tok_Application_Group$
        AND [Cloudstudio2].[dbo].integrated_assessment_mgl.waves=$Waves$;
      UPDATE [Cloudstudio2].[dbo].integrated_assessment_mgl SET
        [Cloudstudio2].[dbo].integrated_assessment_mgl.pl_start_date = REPLACE(pl_start_date,'T',' ');
      UPDATE [Cloudstudio2].[dbo].integrated_assessment_mgl SET
        [Cloudstudio2].[dbo].integrated_assessment_mgl.pl_end_date = REPLACE(pl_end_date,'T',' ');
      UPDATE [Cloudstudio2].[dbo].integrated_assessment_mgl SET
        [Cloudstudio2].[dbo].integrated_assessment_mgl.pl_current_date = REPLACE(pl_current_date,'T',' ');
      " | dbxoutput output="integrated_assessment_mgl_date";
    </query>
    <earliest>0</earliest>
    <latest></latest>
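One likely culprit, offered as a hedged guess: $result.today$ only resolves inside the <done> (or <progress>) handler of the search that produced it, so inside the dbxquery string it is never substituted. Since the <done> handler above already copies the value into the defaut_time token, referencing that token instead should work:

... pl_current_date='$defaut_time$' ...

in place of pl_current_date='$result.today$' inside the UPDATE statement.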
Hi guys, we have an issue when trying to start the free trial of Splunk Cloud. The message is "We're sorry, an internal error was detected when creating the stack. Please try again later." Can you have a look or give me another way to start it? We are not allowed to create cases, since we do not have an entitlement number yet... Thank you in advance! Best regards, Christian
We are developing a custom search command to create events, using a streaming command with version 2 of the protocol. As the source is quite slow, we'd like to send smaller chunks of results back to Splunk than the default 50,000 (e.g. chunks of 1,000 events) so that users can view partial results sooner. We've tried various approaches, including an incrementing integer and calling self.flush() when it is divisible by 1,000, but that caused a buffer-full error. Any suggestions would be really appreciated.

...
@Configuration(type='streaming')
class OurSearchCommand(GeneratingCommand):
    ...
        for item in OurGenerator():
            item['_time'] = item['timestamp']
            yield item
...
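A hedged sketch of one batching approach, under the assumption that self.flush() is safe between batches rather than on every record (splunklib's flushing behaviour has varied across versions, so treat this as an experiment, not a guaranteed API):

import itertools
import sys

from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration

@Configuration(type='streaming')
class OurSearchCommand(GeneratingCommand):
    BATCH = 1000  # hypothetical chunk size

    def generate(self):
        source = OurGenerator()  # the slow upstream iterator from the post
        while True:
            # Pull at most BATCH items off the slow source.
            batch = list(itertools.islice(source, self.BATCH))
            if not batch:
                break
            for item in batch:
                item['_time'] = item['timestamp']
                yield item
            # Ask the protocol layer to emit what has been buffered so far.
            self.flush()

dispatch(OurSearchCommand, sys.argv, sys.stdin, sys.stdout, __name__)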
Hi, @493669 @MuS @dturnbull_splun @bowesmana, can anyone please help me replace the join in the query below?

index=167515-np sourcetype=hardware
| fields deviceId, productType, productId, physicalType
| search physicalType=Chassis
| dedup deviceId
| join deviceId
    [ search index=167515-np [| `last_np_sourcetype( "index=167515-np", "group_members")` ] groupId=288348
    | fields deviceId ]
| stats dc(productId) as PIDs by productType
| search productType=Routers
| table PIDs

Thanks
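Since the join only filters chassis devices down to members of the group, the subsearch can act as a filter on the base search directly (same macro and IDs as in the post); a sketch:

index=167515-np sourcetype=hardware physicalType=Chassis
    [ search index=167515-np [| `last_np_sourcetype( "index=167515-np", "group_members")` ] groupId=288348
    | fields deviceId ]
| dedup deviceId
| stats dc(productId) as PIDs by productType
| search productType=Routers
| table PIDs

The subsearch's deviceId results become an implicit (deviceId=... OR deviceId=...) filter, subject to the usual subsearch result limits.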
Good morning everyone, I have a sourcetype whose event time shows as 5 hours earlier than the index time. I have tried adding the TZ stanza to the TA, as we are in the America/New_York TZ, but after a restart the issue still occurs. This is a syslog input: Splunk has a monitor input configured and the data is ingested from there. I am at a loss as to what else to try or look at, since I haven't had any luck yet. The TA is pushed from a DS to the search tier, and the props.conf has been updated from that point. Thank you in advance for any help. The fields below were found by following this link: https://community.splunk.com/t5/Getting-Data-In/Incorrect-event-time-in-Splunk/m-p/136662

_time: 2020-12-18 01:56:19
delay: 18001
indextime: 12/18/2020 06:56:20
date_zone: 0
host: 1.1.1.1
source: /var/log/syslog-ng/fireeye_hx/1.1.1.1/1.1.1.1_2020-12-18.log
sourcetype: hx_cef_syslog
_raw: 2020-12-18T06:56:19+00:00 1.1.1.1 cef[18505]: CEF:0|fireeye|hx|5.0.2|Malware Hit Found|Malware Hit Found|10|rt=Dec 18 2020 11:56:19 UTC dvchost=xxxx deviceExternalId=xxxx categoryDeviceGroup=/IDS categoryDeviceType=Malware Protection categoryObject=/Host cs1Label=Host Agent Cert Hash cs1=hash dst=x.x.x.x dmac=xx-xx-xx-xx-xx-xx dhost=MAC1 dntdom=xyz deviceCustomDate1Label=Agent Last Audit deviceCustomDate1=Dec 18 2020 07:52:21 UTC cs2Label=FireEye Agent Version cs2=x.x.x cs5Label=Target GMT Offset cs5=-PT5H cs6Label=Target OS cs6=somemachine externalId=24807616 start=Dec 18 2020 11:56:00 UTC categoryOutcome=/Success categorySignificance=/Compromise categoryBehavior=/Found cs7Label=Resolution cs7=ALERT cs8Label=Alert Types cs8=malware cs12Label=Malware Category cs12=file-event act=Detection MAL Hit msg=Host xxxx Malware alert categoryTupleDescription=Malware Protection found a compromise indication. cs4Label=Process Name cs4=Process categoryTechnique=Malware cs13Label=Malware Engine cs13=AV
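One hedged reading of the sample event: the syslog header claims +00:00 but is 5 hours behind the CEF rt= field, which appears to carry the real UTC time (rt=Dec 18 2020 11:56:19 UTC vs. header 2020-12-18T06:56:19+00:00). When a timestamp carries an explicit zone, the TZ setting in props.conf is ignored, which would explain why the stanza changed nothing. A sketch that keys timestamp extraction off rt= instead (sourcetype name from the post):

# props.conf
[hx_cef_syslog]
TIME_PREFIX = rt=
TIME_FORMAT = %b %d %Y %H:%M:%S %Z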
Hello, we have a heavy forwarder that takes too much RAM for cache. How can we find which Splunk functionality (saved search or other) could be caching in RAM, and how can we limit it? (Around tens of gigabytes are taken.) Thank you.
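A hedged starting point for finding the consumer: the _introspection index records per-process memory on the forwarder, so something like the search below (the host name is a placeholder) should show which splunkd process, or which search, holds the RAM; note that OS file-system cache is counted separately and is usually reclaimable:

index=_introspection host=my-heavy-forwarder sourcetype=splunk_resource_usage component=PerProcess
| stats max(data.mem_used) as mem_used_mb by data.process, data.args
| sort - mem_used_mb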
Hi, I have the below search:

| tstats values(Authentication.src_ip) as src_ip values(Authentication.src_host) as src_host from datamodel=Authentication.Authentication where Authentication.user=* by Authentication.dest, Authentication.action, Authentication.user, Authentication.app
| `drop_dm_object_name("Authentication")`
| stats count(eval(action=="failure")) as failure, count(eval(action=="success")) as success, dc(dest) as dest_count, values(src_ip) as src_ip, values(src_host) as src_host

The search displays values of certain fields from the Authentication data model. Two of the fields hold multivalues: Authentication.src and Authentication.src_host. Authentication.src_host is a calculated eval field of the datamodel that performs a DNS lookup of the Authentication.src field (an IP address). The problem I'm having is that stats values() sorts each of these two columns alphanumerically, independently of the other, so a src value is not displayed in line with its correct corresponding src_host value. In the example below, the src_host value for 10.1.1.1 is actually c_host, but a_host is displayed alongside it because of the alphanumeric sorting:

src           src_host
10.1.1.1   a_host
10.1.1.2   b_host
10.1.1.3   c_host

Is there a way I can output the values of the src_host field with the correct corresponding value of the src field? Thanks!
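One common workaround, sketched under the assumption that src_ip can be moved into the tstats BY clause: keep each src_ip on its own row so it still sits next to its src_host, glue the pair into a single value, and only then aggregate:

| tstats values(Authentication.src_host) as src_host from datamodel=Authentication.Authentication where Authentication.user=* by Authentication.dest, Authentication.action, Authentication.user, Authentication.app, Authentication.src_ip
| `drop_dm_object_name("Authentication")`
| eval src_pair=src_ip." = ".src_host
| stats count(eval(action=="failure")) as failure, count(eval(action=="success")) as success, dc(dest) as dest_count, values(src_pair) as src_pair

Each src_pair value then sorts as a unit, so the IP and its hostname can no longer drift apart.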
Hello, I installed the Qualys Technology Add-on v1.8.4 and everything works fine, but the knowledge base panel is only partially populated. In the dashboard I see the QID, SEVERITY, TYPE, TITLE, and CATEGORY, but the CVSS fields are all empty. I enabled the CVSS score on the Qualys portal side as described in the Qualys documentation, but nothing changed. So I performed the API call with cURL from the server where I installed the TA, and the web API returns the CVSS scores correctly. So I think the problem resides in kbpopulator.py. Could you please help me?
Hi all, I was wondering about this as I was writing some docs today and playing around creating some clusters. I was always taught, and have always read, that you should not use the Deployment Server to create a Search Head Cluster, because /etc/apps gets wiped by the Deployer when the search heads turn into a cluster. That much I understand; that's why we always use the CLI to initialise the SHs, then bootstrap the captain and attach to the Cluster Master. But as I was going through my Splunk Core Consultant notes, I remembered a comment on one of the PPT slides stating something like: /etc/apps would be wiped and you would have to deploy those configurations again from /etc/shcluster/apps on the deployer. So, what is the 'official' best practice, at a Professional Services Consultant level, around Search Head Clustering? I am already using all the official Splunkbase apps to install my clusters, but when it comes to the SH Cluster I always go CLI. Thank you for your time and answer.
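For reference, the CLI sequence I mean is the documented one; hostnames, ports, and the secret below are placeholders:

# on each search head member
splunk init shcluster-config -auth admin:changeme -mgmt_uri https://sh1.example.com:8089 -replication_port 9200 -conf_deploy_fetch_url https://deployer.example.com:8089 -secret shclustersecret
splunk restart

# on one member only, to elect the first captain
splunk bootstrap shcluster-captain -servers_list "https://sh1.example.com:8089,https://sh2.example.com:8089,https://sh3.example.com:8089" -auth admin:changeme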
Hi Team,

index=AA source=*XXX.log
| rex field=_raw "- (?<uc>U(\d{7}|\d{8})) "
| rex field=uc "(?<ul5>\d{5})$"
| rex "[^\w](?<JOB>(?<env>[A-Z0-9@_#]+)\.[A-Z0-9@_#]+\.[A-Z0-9@_#]+\.(?<app>[A-Z0-9@_#]+\.[A-Z0-9@_#]+)\.[A-Z0-9@_#]+)"
| search env=* app=* JOB=*DEV.* ul5=*11007*
| stats count as "Alert Count" by JOB
| sort - "Alert Count"

With the above search I can get the count of jobs that have ul5=*11007* for a given period of time. For example, for 7 days I got this output:

JOB                         Alert Count
DEV.JOBS.Temp1   18
DEV.JOBS.Temp2   11
DEV.JOBS.Temp3   7

From the above I know DEV.JOBS.Temp1 has count 18, but this job may have repeated on only 1 day, not on all 7 days. How can I count, for each job, the number of distinct days on which it occurred? Example data:

14-dec-2020 DEV.JOBS.Temp1 ul5=*11007* count 2
14-dec-2020 DEV.JOBS.Temp2 ul5=*11007* count 11
15-dec-2020 DEV.JOBS.Temp1 ul5=*11007* count 10
15-dec-2020 DEV.JOBS.Temp2 ul5=*11007* count 21
16-dec-2020 DEV.JOBS.Temp1 ul5=*11007* count 3
16-dec-2020 DEV.JOBS.Temp2 ul5=*11007* count 6
17-dec-2020 DEV.JOBS.Temp1 ul5=*11007* count 2
17-dec-2020 DEV.JOBS.Temp2 ul5=*11007* count 11
18-dec-2020 DEV.JOBS.Temp1 ul5=*11007* count 10
18-dec-2020 DEV.JOBS.Temp2 ul5=*11007* count 21
19-dec-2020 DEV.JOBS.Temp1 ul5=*11007* count 3
19-dec-2020 DEV.JOBS.Temp2 ul5=*11007* count 6
19-dec-2020 DEV.JOBS.Temp3 ul5=*11007* count 6

If I search, I should get output like the below; the first two jobs should show 5 because they repeated for 5 days:

JOB                         Alert Count
DEV.JOBS.Temp1   5
DEV.JOBS.Temp2   5
DEV.JOBS.Temp3   1

Thanks
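A sketch of the day-based count, built directly on the search above: bucket each event into its calendar day, then count distinct days per job.

index=AA source=*XXX.log
| rex field=_raw "- (?<uc>U(\d{7}|\d{8})) "
| rex field=uc "(?<ul5>\d{5})$"
| rex "[^\w](?<JOB>(?<env>[A-Z0-9@_#]+)\.[A-Z0-9@_#]+\.[A-Z0-9@_#]+\.(?<app>[A-Z0-9@_#]+\.[A-Z0-9@_#]+)\.[A-Z0-9@_#]+)"
| search env=* app=* JOB=*DEV.* ul5=*11007*
| eval day=strftime(_time, "%Y-%m-%d")
| stats dc(day) as "Alert Count" by JOB
| sort - "Alert Count"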
Hi, we have one server running 4 to 5 applications. All applications are PHP, running on the Drupal CMS. We want to track whether or not each application is accessible.
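One low-tech sketch, assuming a Universal Forwarder on the server and treating all names and URLs as placeholders: a scripted input that curls each application and logs the HTTP status, which Splunk can then alert on.

# inputs.conf
[script://./bin/check_apps.sh]
interval = 300
sourcetype = app_health
index = web

# bin/check_apps.sh
#!/bin/sh
for url in https://app1.example.com https://app2.example.com; do
    code=$(curl -s -o /dev/null -w "%{http_code}" --max-time 10 "$url")
    echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) url=$url status=$code"
done

A search such as index=web sourcetype=app_health | stats latest(status) by url then shows which applications currently respond.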