All Topics

Event: [{"hostname":"BBBBBBBBB","contentSourceName":"Authored","contentSourceType":"Authored","incremental":true,"skipCrawl":false,"isBulk":false,"startTime":1616802303335,"endTime":1616802355772,"reportStatus":"Success","documentsFound":20,"documentsFailed":0,"documentsSucceeded":16,"documentsFiltered":0,"documentsUnchanged":0,"contentProcessed":16,"contentAdded":0,"contentUpdated":0,"contentDeleted":0,"pdfSlices":0,"pdfDocCount":0,"exceptionCount":0,"generalExceptionCount":0,"warningCount":0,"processorFailureCount":0,"generalizedFailureCount":0,"heritrixErrorCount":0,"duplicateItemCount":0,"duplicateReportRelativeFilename":null,"jobId":-1}, {"hostname":"AAAAAAAA","contentSourceName":"Authored","contentSourceType":"Authored","incremental":true,"skipCrawl":false,"isBulk":false,"startTime":1616801520297,"endTime":1616801578765,"reportStatus":"Success","documentsFound":40,"documentsFailed":0,"documentsSucceeded":28,"documentsFiltered":0,"documentsUnchanged":0,"contentProcessed":28,"contentAdded":0,"contentUpdated":0,"contentDeleted":0,"pdfSlices":0,"pdfDocCount":0,"exceptionCount":0,"generalExceptionCount":0,"warningCount":0,"processorFailureCount":0,"generalizedFailureCount":0,"heritrixErrorCount":0,"duplicateItemCount":0,"duplicateReportRelativeFilename":null,"jobId":-1}, {"hostname":"ZZZZZZZZZ","contentSourceName":"Authored","contentSourceType":"Authored","incremental":true,"skipCrawl":false,"isBulk":false,"startTime":1616797920257,"endTime":1616797999256,"reportStatus":"Success","documentsFound":104,"documentsFailed":0,"documentsSucceeded":59,"documentsFiltered":0,"documentsUnchanged":0,"contentProcessed":59,"contentAdded":0,"contentUpdated":0,"contentDeleted":0,"pdfSlices":0,"pdfDocCount":0,"exceptionCount":0,"generalExceptionCount":0,"warningCount":0,"processorFailureCount":0,"generalizedFailureCount":0,"heritrixErrorCount":0,"duplicateItemCount":0,"duplicateReportRelativeFilename":null,"jobId":-1}, 
{"hostname":"YYYYYYYY","contentSourceName":"Authored","contentSourceType":"Authored","incremental":true,"skipCrawl":false,"isBulk":false,"startTime":1616794883261,"endTime":1616795120383,"reportStatus":"Success","documentsFound":236,"documentsFailed":3,"documentsSucceeded":121,"documentsFiltered":0,"documentsUnchanged":0,"contentProcessed":121,"contentAdded":0,"contentUpdated":0,"contentDeleted":0,"pdfSlices":0,"pdfDocCount":0,"exceptionCount":0,"generalExceptionCount":0,"warningCount":0,"processorFailureCount":3,"generalizedFailureCount":3,"heritrixErrorCount":0,"duplicateItemCount":0,"duplicateReportRelativeFilename":null,"jobId":-1}, {"hostname":"XXXXXXXX","contentSourceName":"Authored","contentSourceType":"Authored","incremental":true,"skipCrawl":false,"isBulk":false,"startTime":1616742071025,"endTime":1616794342113,"reportStatus":"Success","documentsFound":83004,"documentsFailed":640,"documentsSucceeded":81533,"documentsFiltered":0,"documentsUnchanged":0,"contentProcessed":81528,"contentAdded":0,"contentUpdated":0,"contentDeleted":0,"pdfSlices":0,"pdfDocCount":0,"exceptionCount":0,"generalExceptionCount":0,"warningCount":0,"processorFailureCount":640,"generalizedFailureCount":640,"heritrixErrorCount":0,"duplicateItemCount":0,"duplicateReportRelativeFilename":null,"jobId":-1}] ================================ We receive all of the above data in a single event. I'd like to extract a few fields from this event and display them in a dashboard table: Hostname | contentSourceName | incremental | startTime | endTime | Duration | reportStatus | documentsFound | documentsFailed. The extra column to add is "Duration", which can be derived from startTime and endTime. The start and end times are in Unix epoch format and need to be converted to a human-readable format. Please help.
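One possible approach, sketched under two assumptions: the JSON array above is the raw text of the event (the index and sourcetype below are placeholders), and startTime/endTime are epoch milliseconds, as their 13-digit values suggest. Split the array with spath and mvexpand, compute Duration before the time fields are overwritten, then format the times with strftime:

```
index=your_index sourcetype=your_sourcetype
| spath path={} output=entry
| mvexpand entry
| spath input=entry
| eval Duration=tostring(round((endTime - startTime) / 1000), "duration")
| eval startTime=strftime(startTime / 1000, "%Y-%m-%d %H:%M:%S")
| eval endTime=strftime(endTime / 1000, "%Y-%m-%d %H:%M:%S")
| table hostname contentSourceName incremental startTime endTime Duration reportStatus documentsFound documentsFailed
```

The `tostring(..., "duration")` conversion renders the elapsed seconds as HH:MM:SS; if the timestamps turn out to be epoch seconds rather than milliseconds, drop the `/ 1000`.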
Hi everyone, There are some .js files under the pdf directory. I assume they are used when we export dashboards to PDF. It would be helpful if someone could share information about the gensvg.js and gensinglevalue.js files present under share/pdf: what they are used for and their roles in Splunk. Thanks
Hi, I read in the Splunk docs that we should avoid using the `*` wildcard in the middle of a string. Does this also apply to the `%` wildcard used in `like()`? Ex: like(some_field, "abc%def"). From my testing, `%` seems able to match punctuation too, unlike `*`.
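For what it's worth, the two mechanisms are different things (a quick sketch with makeresults; the sample value is made up). The guidance about `*` in the middle of a search term concerns index-time term matching, where a wildcard does not cross minor breakers such as punctuation, whereas `like()` runs at eval time on already-retrieved events, so `%` simply matches any run of characters:

```
| makeresults
| eval some_field="abc.def"
| eval like_result=if(like(some_field, "abc%def"), "match", "no match")
| eval regex_result=if(match(some_field, "^abc.*def$"), "match", "no match")
| table some_field like_result regex_result
```

So the performance caveat for mid-string `*` does not carry over to `like()` directly; the trade-off is that a `like()` pattern cannot narrow which events are fetched from the index in the first place.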
Hi there, I am new to Splunk and trying to figure out a way to make the search below faster: index=pan_logs sourcetype="pan:threat" | search NOT (client_ip=xxxx OR client=xxxx OR client_ip=xxxx OR client_ip=xxxx OR client_ip=xxxx OR client_ip=xxxx ) | eval field=split(_raw, ",") | eval type=mvindex(field,3), subtype=mvindex(field,4), src_ip=mvindex(field,7), dst_ip=mvindex(field,8), nat_src=mvindex(field,9), nat_dst=mvindex(field,10), rule_name=mvindex(field,11), app=mvindex(field,14), src_zone=mvindex(field,16), dst_zone=mvindex(field,17), ingress_if=mvindex(field,18), egress_if=mvindex(field,19), log_action=mvindex(field,20), src_port=mvindex(field,24), dst_port=mvindex(field,25), proto=mvindex(field,29), action=mvindex(field,30), url=mvindex(field,31), threat_id=mvindex(field,32), cat=mvindex(field,33), sev=mvindex(field,34), direction=mvindex(field,45) | search subtype!=url action=allowed OR action=alert OR action=sinkhole url!="\"saw.dll\"" | table _time type subtype src_ip dst_ip nat_src nat_dst rule_name app src_zone dst_zone ingress_if egress_if log_action src_port dst_port proto action url threat_id cat sev direction index | fields "_time", "action", "app", "cat", "direction", "dst_ip", "dst_port", "dst_zone", "egress_if", "index", "ingress_if", "log_action", "nat_dst", "nat_src", "proto", "rule_name", "sev", "src_ip", "src_port", "src_zone", "subtype", "threat_id", "type", "url" Thank you so much in advance.
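A few general ideas, sketched with placeholder values (xxxx as in the search above) and not tested against this data: put the NOT clause into the base search so it is applied at the indexers rather than in a piped `| search`, extract only the columns actually used, and filter as early as possible. If the Palo Alto Networks add-on already extracts these fields at search time, the manual split of _raw may be unnecessary and is often the slowest part:

```
index=pan_logs sourcetype="pan:threat" NOT (client_ip=xxxx OR client=xxxx)
| fields _raw
| eval field=split(_raw, ",")
| eval subtype=mvindex(field,4), action=mvindex(field,30), url=mvindex(field,31)
| where subtype!="url" AND (action="allowed" OR action="alert" OR action="sinkhole") AND url!="\"saw.dll\""
| fields - field
```

The same pattern extends to the remaining mvindex fields; the point is that every event surviving to the eval stage should already have passed the cheap indexed-term filters.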
Hi, I am upgrading an indexer from 7.2.6 to 7.3.9 on a Windows 2012 R2 server. Near the end of the installation, I see that the installer is rolling back the upgrade. Where do I find out why the installation is failing? I've done upgrades in the past, but I have never seen it roll back an installation. Thx.
How do I check if my Splunk environment is set up for search head pooling? We have SH clustering all set up, and I am preparing for HA and DR. Any ideas in this area to help with HA/DR are appreciated.
Hello everyone. I am trying to deploy ESS, but I am having some trouble with notable events. I cannot see results in the Incident Review dashboard, and this is because the notable event index is empty. I created a correlation search, and a notable event was supposed to be created as part of the adaptive response action. But it is not working, so I decided to run the search from the alert directly, and there I can see results. I also followed this guide: https://docs.splunk.com/Documentation/ES/6.5.0/Admin/Troubleshootnotables and found this (screenshot attached). As you can see, everything looks OK. It is important to mention that some searches have been skipped, but not all of them. Also, I didn't change anything in Splunk_SA_CIM; I read that sometimes that can be a problem, but it isn't my case. Here is an image of the result of this search: index=_internal sourcetype=scheduler. I really don't know what is happening. I would really appreciate the help. Regards
Hello All, I've been trying to create a base search for my dashboard. I have included all the fields that both queries have in common, labeled the first search with id="basesearch" and the second with base="basesearch". I keep getting "Error in 'eval' command: Failed to parse the provided arguments. Usage: eval dest_key=expression". I also have a question about tokens: are they only supposed to appear in the first query, under the id basesearch? What is wrong with my base searches here? Thanks in advance.      <form> <label>Cloned Dashboard </label> <search id="basesearch"> <----(This is the start of my base search) <query> (index=dmx_rapper.xmn $tok_eco_alias$ (team=dev staging="Test" ) OR ( team=Pro )) | eval HRofstage=case(stage="SentStatus", HRStamp), | eval ProPriority=case(team="Pro", lookupService), sentToProHR=case(Type="sentToPro", HRLogged) | stats earliest(sentToProHR) as sentToProHR latest(HRofstage) as HRofstage values(Duration) as Duration values(lookupService) as lookupService dc(Identifier) as TotalDocs values(Total) as Total values(ProPriority) as Pro_Priority by Identifier | where Pro_Priority="$tok_rate$" | eval startTime = strptime(sentToProHR,"%Y-%m-%d %H:%M:%S.%q"), endTime=strptime(HRofstage,"%Y-%m-%d %H:%M:%S.%6N") | where isNotNull(sentToProHR) AND isNotNull(HRofstage) | eval Duration = ((endTime-startTime)/60) | eval ServiceValue=case(lookupService="Low", 3600, lookupService="Medium", 2880, lookupService="High", 1440) </query> <earliest>$time_range.earliest$</earliest> <latest>$time_range.latest$</latest> <title>Service Value Success Count and Percentage </title> <search base="basesearch"> <----(2nd query for basesearch) <query> search | eval ServiceValue=if(Duration&lt;=ServiceValue, "Success", "Failure") | eval Total=case(ServiceValue="Success", Identifier) | stats dc(Total) as ServiceValue dc(Identifier) as Totals_Received | eval Percentage=round((ServiceValue/Total_Received)*100) | eval ServiceValue=tostring(ServiceValue,"commas") . " (" .Percentage."%" . 
")" | table ServiceValue </query> </search>  
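Two likely culprits, judging only from the snippet as posted (hedged guesses, not verified against the environment): the first eval ends with a trailing comma (`case(stage="SentStatus", HRStamp),`), which is exactly the kind of syntax that produces "Failed to parse the provided arguments", and the post-process query computes `Totals_Received` in stats but then divides by `Total_Received`. A corrected sketch of the first fragment:

```
| eval HRofstage=case(stage="SentStatus", HRStamp)
| eval ProPriority=case(team="Pro", lookupService), sentToProHR=case(Type="sentToPro", HRLogged)
```

Similarly, rename `Totals_Received` (or the reference to it) so the stats output and the Percentage eval agree. As for tokens: a post-process search inherits results from the base search, but tokens such as $tok_rate$ may appear in either query; they only need to be defined by the dashboard's inputs.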
I have recently deployed Splunk in a distributed environment with the following elements: Management Server (SH Cluster Deployer, Deployment Server, Index Cluster Master); Search Head Cluster (3 Search Heads); Indexer Cluster (3 Indexers). The add-on contains elements that should be deployed to the search heads as well as elements that should be deployed to the indexers. However, because some of these configurations also define the inputs, and the input is a call to the Okta API, I want to avoid the scenario where all three indexers pull the same data from the Okta API. Is there any guidance from Okta regarding the best way to deploy this add-on in an environment with clustered search heads and indexers? Should I simply install the add-on on a single search head and indexer rather than use the cluster bundle method? Or perhaps it would be better to use a Heavy Forwarder. Any guidance is greatly appreciated.
Hi, I am a Splunk newbie. I am attempting to create an alert that will notify me if loadAvg1mi is sustained above 20 for more than one hour. This is how I started: index=os host=myserver sourcetype=vmstat loadAvg1mi | where loadAvg1mi>20 | timechart avg(loadAvg1mi) But I have no idea how to add the "sustained over 1 hour" condition. Any ideas would be appreciated. Thanks
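One way to express "above 20 for a sustained hour" (a sketch; the 5-minute span and the threshold are assumptions to tune): bucket the load into 5-minute averages, then use streamstats to require that all 12 buckets in a rolling hour exceed 20:

```
index=os host=myserver sourcetype=vmstat loadAvg1mi=*
| timechart span=5m avg(loadAvg1mi) as load
| eval above=if(load > 20, 1, 0)
| streamstats window=12 sum(above) as buckets_above
| where buckets_above = 12
```

Saved as an alert over a window slightly longer than an hour (say, the last 70 minutes) and triggered when the number of results is greater than zero, this fires only when every bucket in some rolling hour stayed above the threshold.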
I find this very ridiculous, considering that the terminology used is not accurate about what it is doing. When doing a search to hide data from being searchable, it calls it delete. Well, it's not deleting the data from the logs, so calling the function delete is essentially a misnomer. Instead, call it hide or removeFromSearch, or some other name that gives it meaning. I've spent the last 2 hours searching for how to reclaim disk space without removing the primary index; it's not possible, and people have been requesting this kind of utility for years! Naming conventions mean a lot. They carry a lot of meaning, and seeing this kind of misnomer is really frustrating.
Check out the latest release of Splunk Cloud Platform: The new Dashboard Studio offers a dashboard-building experience with advanced visualization tools and fully customizable layouts to easily create visually compelling, interactive dashboards with an intuitive UI. Cloud-Cloud Federated Search brings users the ability to search across Cloud deployments from one search bar, using enhanced search commands. Cloud admins now have more detailed user role settings, providing control over the age of events shown in search results and allowing finer-tuned management of workload usage by user. Additionally, self-service index deletion is now possible without requiring a rolling restart. Splunk Secure Gateway (SSG) is now available packaged with Splunk Cloud Platform, allowing users to easily connect mobile devices to Splunk. SSG and Spacebridge are SOC 2 Type 2, ISO 27001, PCI, and HIPAA compliant. Read our blog and the release notes for more details!
Hi, We are using the Splunk Java SDK to connect to the Splunk server to perform data ingestion and search operations. We are performing several search operations and are getting this error: splunk.HttpException: HTTP 401 -- call not properly authenticated. We have checked that the credentials and port are correct. Any help fixing this issue would be appreciated. Thanks. Hassan.
Hello, I have two similar strings that I need to differentiate. These are the key words in the string: 1. Special 2. Specialist When they come into Splunk, they arrive as part of a command, e.g.: "Alter User Special", "Alter User Specialist". Currently I am using these queries: host=* | eval SPECIALIST=if(like(EVNTCOMMAND, "% SPECIALIST%"),1,0) | chart sum(SPECIALIST) and host=* | eval SPECIAL=if(like(EVNTCOMMAND, "% SPECIAL%"),1,0) | chart sum(SPECIAL) I need the % after Special and Specialist because sometimes there is more data after those strings. Any suggestions? Thank you, Marco
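One alternative worth considering (a sketch using the field name from the example above): because "SPECIALIST" contains "SPECIAL", the `"% SPECIAL%"` pattern counts both commands. A regex word boundary via match() distinguishes them, since `\bSPECIAL\b` cannot match inside SPECIALIST:

```
host=*
| eval SPECIAL=if(match(EVNTCOMMAND, "\bSPECIAL\b"), 1, 0)
| eval SPECIALIST=if(match(EVNTCOMMAND, "\bSPECIALIST\b"), 1, 0)
| chart sum(SPECIAL) as Special sum(SPECIALIST) as Specialist
```

The boundary still allows trailing data after the keyword, so the role of the trailing `%` is preserved.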
Hi All, My Splunk is running version 7.3.6. The issue is that scheduled report emails don't include the CSV-format results if the search returns zero results, while a PDF file is sent even when the results are zero. Any help is appreciated. Thank you, Anu @isoutamo
Good Morning, We are trying to determine if we need to upgrade to the latest version of the plugin between PagerDuty and Splunk. We're currently on version 2.0.3 and use Splunk Enterprise version 7.3.8. The issue we're experiencing is that anytime a valid JSON payload is input into an alert/query, it never makes its way from Splunk to PagerDuty. We've verified the alerts work because our team email is also CC'd on the alerts themselves. PagerDuty support has also said they never received alerts for that specific service integration key. Is this a plugin issue, and is it fixed in the latest versions that have come out? We're trying to determine whether upgrading will resolve the issue.
Hi, I am trying to run the dbxquery command, but it keeps throwing the error below. I have configured the database connection on a heavy forwarder in the DB Connect add-on, and I am running the query on a search head. I have verified that the username and password work fine. Can someone help me understand what I am missing? error: com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: ORA-02002: error while writing to audit trail ORA-55917: Table flush I/O failed for log ID: 3 bucket ID: 0 ORA-55917: Table flush I/O failed for log ID: 3 bucket ID: 0 ORA-01918: user 'ORA-01918: user '' does not exist ORA-28001: the password has expired ' does not exist ORA-28001: the password has expired query: | dbxquery maxrows=200000 query="SELECT SERVER_NAME, HOST_NAME, MEMORY, OS, SERVER_ENV, TCPIPADDRESS, PROJECT, LOCATION, BUILDING_CODE, COUNTRY, MAKE, MODEL, BOX, GRID, AC_ASSET_TAG, STATUS, OS_SG, DB_SG, SUPPORTCONTRACTS, MC_CS_IP, MC_TS_IP, SERVER_CONTACT, SERVER_CATEGORY FROM \"ASSETMANAGER\".\"AMASSET_SUPPORT_VIEW\"" connection="FCAMS" Thanks
Hi Everyone, I have one requirement. When there is no data, the dashboards currently show "NO RESULTS FOUND". Instead, I want to show a custom message like "Contact the Team". How can I do that? Can someone guide me?
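In Simple XML, one common pattern (a sketch: the token name, search, and message text are made up, and this would be attached to each panel's search) is to set a token in the search's &lt;done&gt; handler based on job.resultCount, and show an &lt;html&gt; panel only when the count is zero:

```xml
<search>
  <query>index=main | stats count by host</query>
  <earliest>-24h</earliest>
  <latest>now</latest>
  <done>
    <condition match="'job.resultCount' == 0">
      <set token="show_msg">true</set>
    </condition>
    <condition>
      <unset token="show_msg"></unset>
    </condition>
  </done>
</search>
<panel depends="$show_msg$">
  <html>
    <p>No data available. Please contact the Team.</p>
  </html>
</panel>
```

The `depends` attribute hides the panel whenever the token is unset, so the message appears only on empty results; the panel itself must sit inside a &lt;row&gt; element as usual.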
Hello all, We are successfully creating a Sankey visualization of our data; however, when we try to expand how many rows are being visualized, the stats table below appears to cover up the visualization. I am new to Splunk; am I missing something obvious about how to hide this table so we can explore larger visualizations? Please see the attachment: there should be 500 rows being drawn, and you can see that there are edges/lines arcing down and behind the table.
Hello, we are working on a log-forwarder solution based on the Java logging library from Splunk. The Splunk dev website documents that the splunk.jfrog.io repository is to be used for Maven-based projects. For this purpose, I configured the repository and the dependency as follows:   <repositories> ... <repository> <id>splunk-artifactory</id> <name>Splunk Releases</name> <url>https://splunk.jfrog.io/artifactory/libs-releases</url> </repository> </repositories> ... <dependencies> ... <dependency> <groupId>com.splunk.logging</groupId> <artifactId>splunk-library-javalogging</artifactId> <version>1.8.0</version> </dependency> </dependencies>     When I run the Maven build, I get the following error: Failed to read artifact descriptor for com.splunk.logging:splunk-library-javalogging:jar:1.8.0: Could not transfer artifact com.splunk.logging:splunk-library-javalogging:pom:1.8.0 from/to splunk-artifactory (https://splunk.jfrog.io/artifactory/libs-releases Authentication failed for https://splunk.jfrog.io/artifactory/libs-releases/com/splunk/logging/splunk-library-javalogging/1.8.0/splunk-library-javalogging-1.8.0.pom 401 Unauthorized -> [Help 1] Before I switched to the new repository, I used repo.spring.io, and this worked well until common usage was disallowed. Can you please help me solve this issue? - Is the URL I'm using (https://splunk.jfrog.io/artifactory/libs-releases) correct? - Do I need authentication to use this location as a Maven dependency? Thanks a lot