All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


There are two searches with CI_Name as the common field. I want to compare the two columns installed and Server_Installed_Package, using CI_Name as the common key: if a package appears in both, mark it as "Completed" in another column; if there is no match, mark it as "Not completed".

First search output:

CI_Name   installed                    shouldBe                      match
Server1   nss-3.44.0-7.el6_10          nss-3.44.0-13.el6_10
Server1   nss-devel-3.44.0-7.el6_10    nss-devel-3.44.0-13.el6_10
Server1   nss-sysinit-3.44.0-7.el6_10  nss-sysinit-3.44.0-13.el6_10

Second search output:

CI_Name   Server_Installed_Package
Server1   libgdata-0.6.4-2.el6.x86_64
Server1   util-linux-ng-2.17.2-12.28.el6_9.2.x86_64
Server1   rt73usb-firmware-1.8-7.el6.noarch
Server1   sssd-1.13.3-60.el6_10.2.x86_64
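A minimal SPL sketch of one way to do this, assuming the two result sets can be combined with a left join on CI_Name plus the package name (<first search> and <second search> are placeholders for the actual queries):

<first search>
| eval package=installed
| join type=left CI_Name package
    [ search <second search>
      | eval package=Server_Installed_Package, on_server="yes"
      | fields CI_Name package on_server ]
| eval match=if(on_server=="yes", "Completed", "Not completed")
| fields - package on_server

The join only matches rows where both CI_Name and the exact package string agree, so anything missing from the second search falls through to "Not completed".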
Hi, I have a query that finds the values of service_name and service_name_count by user, Account_name. I need to search for service_name_count>5, which is flagged with | eval flag1="New", and I need to exclude the history from the search using | join type=left user,Account_name [base query... earliest=-15d latest=-7d], which is flagged with | eval flag2="History". I then need to search only for events with | search flag1="New" NOT flag2="History".

Apart from these, I need to find:
1. The total count of new services (count the new services from all the requests that have at least 5 new services).
2. The number of requests where each request has at least 5 new services.

These need to be grouped by Account_Name and user. Please help me with any suggestions for the above. My sample code:

index=test_index sourcetype=test_sourcetype
| fields Service_Name Service_Name_Count Account_Name Account_Domain user Source_IP index sourcetype
| stats earliest(_time) as earliest latest(_time) as latest values(Service_Name) as Service_Name values(Service_Name_Count) as Service_Name_Count values(Account_Name) as Account_Name values(Account_Domain) as Account_Domain values(Source_IP) as Source_IP values(index) as orig_index values(sourcetype) as orig_sourcetype by user
| convert ctime(earliest) ctime(latest)
| search Service_Name_Count > 5
| eval flag1="New", flag2="n.a."
| join type=left Account_Name Service_Name
    [ search index IN (test_index2,test_index3) sourcetype=test_sourcetype2 EventCode=1234 earliest=-15d latest=-7d
      | fields Account_Name user Account_Domain Service_Name src_ip
      | rename src_ip as Source_IP
      | eval flag2="History" ]
| search flag1="New" NOT flag2="History"
| table earliest latest Account_Name user Account_Domain Service_Name_Count Service_Name Source_IP flag1 flag2

Thanks in advance!
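A minimal sketch for the two counts, assuming each row left after the flag filtering represents one request and Service_Name is a multivalue field:

... existing pipeline ...
| search flag1="New" NOT flag2="History"
| eval new_service_count=mvcount(Service_Name)
| where new_service_count >= 5
| stats sum(new_service_count) as total_new_services count as requests_with_5plus_new by Account_Name user

mvcount() sizes the multivalue list per request, the where keeps only requests with at least 5 new services, and the final stats produces both totals per Account_Name/user pair.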
index=x source=xtype
| table _time bank
| bucket span=1s _time
| stats count as pps by _time
| timechart span=1h max(pps) as "maxpps"

The maxpps column produces output, but the chart does not render in the dashboard panel.
Show if the field "subject" contains one or more camel-case strings, like: "LuckyChance to Receive a FREE IpadPro! ClaimNow!" I'm having a hard time creating a regex for this. Please help. Thank you.
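A minimal sketch, assuming "camel case" means two or more capitalized chunks run together with no space (so it matches LuckyChance, IpadPro, and ClaimNow, but not plain capitalized words):

... | regex subject="\b(?:[A-Z][a-z]+){2,}\b"

Or, to pull the matching strings out into a multivalue field instead of just filtering events:

... | rex field=subject max_match=0 "(?<camel_word>\b(?:[A-Z][a-z]+){2,})"
    | where isnotnull(camel_word)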
| stats count by field1 field2 field3 only shows yesterday's count. How can I show count1 for yesterday, count2 for 2 days ago, and count3 for 3 days ago, shown as follows:

field1   field2   field3   count1   count2   count3
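A minimal sketch using conditional counts over a three-day window (the index name is a placeholder):

index=your_index earliest=-3d@d latest=@d
| eval count1=if(_time>=relative_time(now(),"-1d@d"), 1, 0)
| eval count2=if(_time>=relative_time(now(),"-2d@d") AND _time<relative_time(now(),"-1d@d"), 1, 0)
| eval count3=if(_time<relative_time(now(),"-2d@d"), 1, 0)
| stats sum(count1) as count1 sum(count2) as count2 sum(count3) as count3 by field1 field2 field3

relative_time(now(),"-1d@d") is midnight at the start of yesterday, so together with the earliest/latest bounds each event lands in exactly one of the three day buckets.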
I have this current search:

index=web
| eval Year=strftime(_time,"%Y")
| eval Month=date_month
| eval success=if(status==200,1,0)
| search status=200 OR status=403
| chart count by Month, status
| eval orden = if(Month="january",1,if(Month="february",2,if(Month="march",3,if(Month="april",4,if(Month="may",5,if(Month="june",6,if(Month="july",7,if(Month="august",8,if(Month="september",9,if(Month="october",10,if(Month="november",11,12)))))))))))
| sort orden
| fields - orden

This search shows a graph of the counts of status 200 and 403 by month. I'm trying to add a percentage line showing the share of status 200 relative to the total. How do I do this? Can you help me, please?
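A minimal sketch of the percentage column, appended after the chart; since chart count by Month, status produces one column per status value, the numeric column names have to be referenced with single quotes:

... | chart count over Month by status
    | fillnull value=0
    | eval pct_200=round('200' / ('200' + '403') * 100, 2)

In a dashboard, pct_200 can then be drawn as a chart overlay line while the 200 and 403 series stay as columns.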
Hi, I have two servers with plugin details. I want to derive two columns, Package_installed and Package_shouldbe, from the plugin output, with the hostname in a separate column. server2 has multiple packages; I want a separate row for each Package_installed/Package_shouldbe pair, with the same hostname repeated on each row.

hostname: server1
Plugins:
Plugin Output:
Remote package installed : gnutls-3.6.16-5.el8_6
Should be                : gnutls-3.6.16-6.el8_7
NOTE: The vulnerability information above was derived by checking the package versions of the affected packages from this advisory. This scan is unable to rely on Red Hat's own security checks, which consider channels and products in their vulnerability determinations.

hostname: server2
Plugins:
Plugin Output:
Remote package installed : httpd-2.4.6-98.el7_9.6
Should be                : httpd-2.4.6-98.el7_9.7
Remote package installed : httpd-tools-2.4.6-98.el7_9.6
Should be                : httpd-tools-2.4.6-98.el7_9.7
Remote package installed : mod_session-2.4.6-98.el7_9.6
Should be                : mod_session-2.4.6-98.el7_9.7
NOTE: The vulnerability information above was derived by checking the package versions of the affected packages from this advisory. This scan is unable to rely on Red Hat's own security checks, which consider channels and products in their vulnerability determinations.
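A minimal sketch, assuming the plugin text lives in a field called Plugins and always uses the literal "Remote package installed :" / "Should be :" phrasing shown above:

<base search>
| rex field=Plugins max_match=0 "Remote package installed : (?<Package_installed>\S+)"
| rex field=Plugins max_match=0 "Should be\s+: (?<Package_shouldbe>\S+)"
| eval pair=mvzip(Package_installed, Package_shouldbe, "|")
| mvexpand pair
| eval Package_installed=mvindex(split(pair,"|"),0), Package_shouldbe=mvindex(split(pair,"|"),1)
| table hostname Package_installed Package_shouldbe

max_match=0 captures every occurrence into a multivalue field, mvzip keeps the nth installed/should-be values paired, and mvexpand splits them into one row per package while hostname is copied onto each row.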
Hello, I have created a Splunk app, very similar to the weather example on GitHub. My app needs to be authenticated in order to access service.storage_passwords; however, when running the command on my admin Splunk account:

|test_command

'None' is being printed in my search.log file for the authenticated object.

This is my Python code:

# Various imports
import logging
import sys
from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration, Option

logger = logging.getLogger("MyCommand")
logger.setLevel(logging.DEBUG)

@Configuration()
class MyCommand(GeneratingCommand):
    ip = Option(require=True)

    def generate(self):
        logger.debug("Starting MyCommand run")
        service = self.service                            # THIS IS None
        sesh_key = self._metadata.searchinfo.session_key  # THIS IS None
        logger.debug(service)   # None
        logger.debug(sesh_key)  # None
        yield {}

# Dispatch the custom search command
dispatch(MyCommand, sys.argv, sys.stdin, sys.stdout, __name__)

And my commands.conf (unsure if these options are correct):

[test_command]
type = python
filename = test_command.py
supports_getinfo = true
supports_rawargs = true
passauth = true
enableheader = true

I assume I am missing something fairly obvious regarding how to pass authentication into my app when a command is run, but I cannot determine the issue. Appreciate any help.
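One thing that often explains this, offered as a hedged suggestion: with the splunklib search command SDK, self.service and self._metadata.searchinfo are only populated when the command runs under the version 2 ("chunked") search command protocol; the v1-era settings (supports_getinfo, supports_rawargs, passauth, enableheader) don't apply there. A commands.conf sketch for the v2 protocol:

[test_command]
filename = test_command.py
chunked = true
python.version = python3

With chunked = true the session key arrives via the search info metadata, so self.service can talk to splunkd (e.g. self.service.storage_passwords) without any of the legacy passauth plumbing.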
How do I create a dashboard to show the activities of users, especially uploading files? Kindly advise.
Hi, I'm working with a large amount of data. I have a main report that extracts all data from the previous month and 5 additional small reports that filter by event type and take only the fields that are relevant for that event. For example: report 1 for event A, report 2 for event B, and so on.

In order to improve performance I want to use a summary index. I read the documentation and I'm doing the following:

1. Create a report in "Searches, Reports, and Alerts". The query is:

index=myIndex source=mySource sourcetype=_json
| rename ...
| table ...
| stats values(*) as * by TimeStamp, source
| lookup lookUp_table_toAdd_Fields.csv source AS source
| sistats values(*) as * by TimeStamp, source

2. Enable summary indexing.
3. Schedule the report to run daily over 24 hours.
4. Create a new search to extract the data saved in the index:

index="summary" source="SummaryIndex_Main"
| stats values(*) as * by TimeStamp, source
| table *

Date range: only 6 days (data between 1.8-6.8, only 987,771 events).

Results: while it runs it looks like it is collecting the data, but when the run finishes, the statistics tab contains no results and I get the error: "The following error(s) occurred while the search ran. Therefore, search results might be incomplete." I don't have permission to change the config files and I'm not sure what I'm doing wrong. Please help!

Note: I need to extract all the original fields from the main query, which is why I use sistats and stats (and not collect). And I have no aggregation; I just need to extract the data and be aware of overlaps.

Related questions that I have posted:
https://community.splunk.com/t5/Reporting/Summary-index-for-non-aggregated-data-How-to-read-only-delta/m-p/653550#M12166
https://community.splunk.com/t5/Reporting/Why-sistats-doesn-t-work-after-lookup/m-p/653864#M12170

Thanks, Maayan
We have some issues with short IDs: we are unable to search by them in Incident Review. Palo Alto XSOAR is integrated with Splunk; some incidents change their status, and the short ID created from XSOAR is reflected in Splunk, but we are unable to search Incident Review with that short ID. Only short IDs created by XSOAR are not searchable; the remaining short IDs in Splunk are searchable. Please advise how to resolve this issue.
Hi Splunkers, why does the following query give me the correct data:

Query:
| inputlookup append=t mitre_lookup
| foreach TA00*
    [ | lookup mitre_tt_lookup technique_id as <<FIELD>> OUTPUT technique_name as <<FIELD>>_technique_name
      | eval <<FIELD>>_technique_name=mvindex(<<FIELD>>_technique_name, 0) ]
| eval codes_tech = "T1548, T1134,T1547"
| makemv delim=", " codes_tech
| eval TA0004 = if(mvfind(codes_tech, TA0004) > -1, TA0004." Es aqui", TA0004)

Result: (works as expected)

But when the data comes from a stats result, it doesn't find the values:

Query:
index=notable search_name="Endpoint - KTH*"
| fields technique_mitre
| stats count by technique_mitre
| eval tech_id=technique_mitre
| rex field=tech_id "^(?<codes_tech>[^\.]+)"
| stats count by codes_tech
| makemv delim=", " codes_tech
| mvexpand codes_tech
| stats count by codes_tech
| inputlookup append=t mitre_lookup
| foreach TA00*
    [ | lookup mitre_tt_lookup technique_id as <<FIELD>> OUTPUT technique_name as <<FIELD>>_technique_name
      | eval <<FIELD>>_technique_name=mvindex(<<FIELD>>_technique_name, 0) ]
| eval codes_tech = codes_tech
| eval TA0004 = if(mvfind(codes_tech, TA0004) > -1, TA0004." Es aqui", TA0004)

Result: (no match)

I would really appreciate your support.
Hello, Splunk is currently installed on one of my AWS EC2 instances. It's a free 60-day trial version, for my personal use to test and do some research. I have done a lot of tasks, including some research work, within that Splunk platform. Is there any way I can convert this trial version to a paid license version, so I can continue using this Splunk platform after 60 days? Thank you so much!
Hello, do we have any Splunk-recommended maximum size for a single source file for UFs to push? I know the maximum size of a lookup is 500 MB, but for Splunk UF-based data ingestion I have a few source files that need to be ingested every day, and each source file is around 2.2 GB. Do you have any recommendations? Thank you so much.
I have 2 lookup files, lookup1.csv and lookup2.csv.

lookup1.csv has the data below:
name, designation, server, ipaddress, dept
tim, ceo, hostname.com, 1.2.3.5, alldept
jim, vp, myhost.com, 1.0.3.5, marketing
pim, staff, nohost.com, 4.0.4.8, hr

lookup2.csv has the data below:
cidr, location
1.2.3.0/24, dc
1.0.3.0/24, carolina
3.4.7.0/24, tx

I would like to look up the field ipaddress from lookup1.csv against the field cidr in lookup2.csv, matching on the first 3 octets (x.x.x), and get the location field if they match. If the ipaddress doesn't match the first 3 octets of any cidr, the location should be marked as "unknown".

Expected output:
tim, ceo, 1.2.3.5, dc
jim, vp, 1.0.3.5, carolina
pim, staff, 4.0.4.8, unknown

I am looking for the search command in Splunk using the 2 lookup tables. Thanks in advance. My search so far has not yielded any good results, but I am still working on it.
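A minimal sketch, assuming lookup2 is defined as a lookup whose cidr field is configured for CIDR matching in transforms.conf:

# transforms.conf (a sketch)
[lookup2]
filename = lookup2.csv
match_type = CIDR(cidr)

# SPL
| inputlookup lookup1.csv
| lookup lookup2 cidr AS ipaddress OUTPUT location
| fillnull value="unknown" location
| table name designation ipaddress location

With match_type = CIDR(cidr), the lookup matches each IP against the subnet rather than comparing strings, and fillnull supplies "unknown" for addresses outside every listed range.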
I'm trying to grab all items based on a field; the field is an "index" identifier from my data, but I only want the most recent one in my dashboard. Since eval's max() doesn't aggregate across events, e.g.:

eval max_value = max(index) | where index=max_value

is eventstats the only way to do this? It seems like a lot of overhead versus just getting a max value:

eventstats max(index) as max_value | where index=max_value

Is there another way to do this better?
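eventstats is the usual idiom for this and works fine; a sketch of one alternative, assuming you just want the rows carrying the highest value:

<base search>
| sort 0 - index
| streamstats first(index) as max_value
| where index=max_value

After the descending sort the first row holds the maximum, streamstats copies it down the stream, and the where keeps any ties. If only a single event is needed, | sort 0 - index | head 1 is simpler still.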
From the sample logs below, we need a rex for:
1. "appl has successfully completed all authentication flows."
2. "Login complete"

2023-01-11 12:34:22,678 DITAU [taskstatus-8] {DIM: hgtijNMHy67v | DIS: aHJyikjI | DIC: HY56GFT6 | PI: 678.89.987 | ND: ks | APP: |OS: iOS-1-7.1 | REV" } APPDA - 87687654356789 - appl has successfully completed all authentication flows.
2023-01-11 12:34:22,678 INFO [taskstatus-8] {DIM: hgtijNMHy67v | DIS: aHJyikjI | DIC: HY56GFT6 | PI: 678.89.987 | ND: ks | APP: | Req: POST /ai/v2/api/assert | OS: iOS-16.6 | VER: 23.4.0} AUDIT - Login complete
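A minimal sketch, assuming both messages always sit at the end of the event after an "APPDA - <digits> - " or "AUDIT - " marker as in the samples:

... | rex "(?:APPDA - \d+ - |AUDIT - )(?<message>appl has successfully completed all authentication flows\.|Login complete)$"

This yields message for both event shapes in one pass; splitting it into two separate rex calls (one per pattern) works just as well if you want separate field names.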
Hello Splunkers, we recently upgraded our Splunk distributed deployment from 8.2.9 to 9.0.5.1. After the upgrade, our Splunk servers started showing under the unknown category, which impacts pretty much all the dashboards in the Monitoring Console. Below is the error:

"Streamed search execute failed because: Error in 'prerest' command: You do not have a role with the rest_access_server_endpoints capability that is required to run the 'rest' command with the endpoint=/services/server/status/limits/search-concurrency?count=0. Contact your Splunk administrator to request that this capability be added to your role.."

Please let me know what I can do to bring the views back to normal. Thanks in advance.
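As the message suggests, the usual fix is to grant the rest_access_server_endpoints capability (introduced in Splunk 9.x) to the role running those searches, either in Settings > Roles or in authorize.conf. A sketch, assuming the admin role is the one affected:

# authorize.conf (a sketch; adjust the role name to your environment)
[role_admin]
rest_access_server_endpoints = enabled

After a restart, the Monitoring Console's rest-based panels should be able to hit the server status endpoints again.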
Good afternoon everyone,

I'm a fairly new Splunk user, so apologies for anything I miss while writing this up. For some reason our dashboard for the Q-Audit app, Qmulos, is no longer working. While processing auditing changes for the last 7 days, the dashboard used to at least show the data that was already processed while loading the rest of the week. Now, while searching, it will only show 0 of however many events matched, until eventually resulting in no results found. I cannot even use the query to find the old data from weeks ago, when it did work successfully. The dashboard was created by another user who no longer works here. I tried cloning the dashboard myself to see if it was possibly a permissions issue, but that did not resolve it. The dashboard essentially audits users initializing applications: a graph of who initialized what application and how they did so. I cannot think of any changes we made that would cause this.

Dashboard query:

| tstats prestats=true summariesonly=false allow_old_summaries=false count as "count(Process)" FROM datamodel=Q_Application_State WHERE (nodename=Application_State) "Application_State.tag"=* BY _time span=1s, host, "Application_State.process", "Application_State.src_user", "Application_State.user"
| stats dedup_splitvals=t count AS "count(Process)" by _time, host, Application_State.process, Application_State.src_user, Application_State.user
I have a HF that recently had its RAM capacity expanded. Ever since, there has been an issue with REST commands against specific endpoints. Example:

| rest splunk_server=splunk-hf1 /services/server/info
(works great and gives me the expected results)

| rest splunk_server_group=dmc_group_* /services/server/status/resource-usage/hostwide
(when I run it, it reports some of my servers, including the HF, as having 0 memory)