Hello, for the longest time I have been loading CSV files into my Splunk instance. Then today I get this: my CSV files seem not to be loading correctly. I tried all the suggestions already listed, but none of them helped. Any ideas before I call tech support? Thanks
I have an alerts index which has a data.rule.name field containing the following values:

COVID-19 linked Cyber Attacks (Social Media) 2 40%
Global Trends, Trending Targets 1 20%
Locations by Risk Level 1 20%
Target Trends, Trending Targets in Watch List 1 20%

I would like to filter events to only include ones where data.rule.name begins with "Target Trends". My SPL is as follows:

index=alerts | where like(data.rule.name, "Target Trends.%")

This produces 0 events. Am I using this function wrong?
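For what it's worth, a sketch of one likely fix: inside eval/where expressions a field name containing dots usually needs single quotes, and in like() the wildcard is %, with . matching only a literal dot:

```
index=alerts
| where like('data.rule.name', "Target Trends%")
```

This is only a sketch against the sample values above; if the field is multivalue or differently named after extraction, the quoting may need adjusting.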
I am doing an audit on Splunk alerts. One of the things I am checking is whether the alert name is in the subject of the email that gets sent. I run the search at the bottom of this message and parse the results. There are hundreds of alerts, and most of them have one of these two settings:

"action.email.subject.alert": "$name$"
"action.email.subject": "$name$"

I don't know the difference between the two, but they seem to match properly set-up alerts in the GUI. There are a few dozen alerts that return neither of these, though. When I look at one of those alerts in the GUI, it has the correct setting: the email subject is $name$. Why don't these alerts, which seem to be configured correctly, return the "action.email.subject" field? I have even run the search returning all fields and can't find another field that looks like it would be the subject. Search:

| rest /servicesNS/-/-/saved/searches
| search alert.track=1
| fields title action.email.to action.email.subject action.email.subject.alert

Splunk 7.3.3
Why do the oldest and most recent data in the _audit index appear current via the CLI on the deployment server, but not current via the GUI? The difference between the dates is over 4 months.
Hi all, could you help me understand what the "Wrap results" option under Format Visualization in Splunk is used for? Also, what impact will it have on our data if we set this value to Yes?
I  need help in extracting ID from nested JSON data in Splunk for including this in report. Sample data: {"preview":false,"result":{"_raw":"{"severity":"INFO","logger":"eu.notas.fns.###.utility.LoggingUtil","thread":"qtp1951963537-1006","message":{"###RequestId":"<<>>","msgDesc":"Image id Successfully ","fileName":null,"errorDesc":null,"requestType":"API","destination":"###_SERVICES","errorCode":null,"source":"EXTERNAL_issue-in","externalRequestId":"<<>>","responseCode":null,"Id":"<<>>","service":"notas-###-issue-in-data-service","stackTrace":null}}","_time":"2021-04-28T11:47:51.318+0200","host":"notas-###-issue-in-data-service-147-qthsj","index":"###_app_prod","linecount":"1","logger":"eu.notas.fns.###.utility.LoggingUtil","message.destination":"###_SERVICES","message.errorCode":"null","message.errorDesc":"null","message.externalRequestId":"<<>>","message.fileName":"null","message.Id":"<<>>","message.###RequestId":"<<>>","message.msgDesc":"Image id Successfully ","message.requestType":"API","message.responseCode":"null","message.service":"notas-###-issue-in-data-service","message.source":"EXTERNAL_issue-in","message.stackTrace":"null","punct":"{"":"","":".....","":"-","":{"":"----","":"_____",","severity":"INFO","source":"###","sourcetype":"###-prod-log","splunk_server":"no1-psplunkidx-14","thread":"qtp1951963537-1006","unix_category":"all_hosts","unix_group":"default"}}
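A sketch of one way to pull the nested ID out with spath, assuming the field of interest is the message.Id shown in the sample above (index and sourcetype names are taken from the sample and may differ in your environment):

```
index=###_app_prod sourcetype=###-prod-log
| spath path=message.Id output=id
| table _time id
```

If message.Id is already auto-extracted at search time, a plain `| table _time message.Id` may be enough; fields whose names contain dots sometimes need renaming before further eval work.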
We are ingesting network events into a log file that looks like this:

Network_Event=ThresholdViolation
Network_EventDesc=A Threshold Violation event has been cleared. (Profile Name: INTERFACE -VRT Interfaces, Rule Name: Network::: Bandwidth Utilization Out >85% 30 min out of 1 hr, Reason for clearing: Clear Threshold criteria has been satisfied for Event Rule.)
Network_ItemName=vpn-0-ipv4-if-ge0/3
Network_EvProp_AlarmClearRuleDetail=UtilizationOut < 75.0
Network_ItemParentId=123456
Network_EvProp_ThresholdProfileFolderId=12334
Network_EventOccurredOn=Thu Apr 10 22:55:00 PDT 2021
Netwok_ItemDesc=Viptela-Interface-AL1-123422
Network_EventState=CLOSED
Network_ItemName=vpn-0-ipv3-313
Network_EvProp_AlarmClearRuleDetail=UtilizationOut < 75.0
Network_EventSubType=Cleared

When we monitor this log and it moves to the indexer, Splunk splits the lines above into two events, because the timestamp Network_EventOccurredOn=Thu Apr 10 22:55:00 PDT 2021 appears in the middle: the first event contains everything up to Network_EvProp_ThresholdProfileFolderId=12334, and the second event starts at Network_EventOccurredOn. I tried adding the props.conf below, but it still didn't work:

[SOURCETYPE]
SHOULD_LINEMERGE=true
LINE_BREAKER=([\r\n]+)Network_EventType=ThresholdViolation

Please help me with this. I want the two events above to be a single event.
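A sketch of one possible fix, assuming each record begins with Network_Event= and the break should be driven by that key rather than by the embedded timestamp. Note the sample key is Network_Event=, not Network_EventType= as in the attempted config, and a lookahead keeps the key itself out of the discarded capture group; TIME_PREFIX and MAX_TIMESTAMP_LOOKAHEAD are standard props.conf settings:

```
[SOURCETYPE]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=Network_Event=)
TIME_PREFIX = Network_EventOccurredOn=
MAX_TIMESTAMP_LOOKAHEAD = 30
```

This is only a guess from the sample above; it assumes Network_Event= never appears mid-record other than as the record opener.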
Hello, I have scheduled a Splunk report that currently notifies me via email if the event count is greater than 3000, and it is scheduled to run on an hourly basis. When the event count is 0, I still get an email from Splunk with a 'No results found' message in it. How do I stop getting these emails when the event count is 0? My requirement is to get emails only when the event count is greater than 3000.
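If the report is saved with a triggered email action rather than an unconditional scheduled send, the condition normally lives in savedsearches.conf; a sketch under that assumption (stanza name and recipient are placeholders):

```
[My Hourly Report]
cron_schedule = 0 * * * *
counttype = number of events
relation = greater than
quantity = 3000
action.email = 1
action.email.to = you@example.com
```

With counttype/relation/quantity set, the email action should only fire when the result count exceeds the threshold, rather than on every scheduled run.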
I am on Day 2 with Splunk. I am trying to get the average number of records by Day of the Week (Mon, Tue, Wed, etc) of the specified timespan.  I can get the total counts by Day of the Week, but I can't seem to get the average number of transactions per Day of the Week. This gets me the total number of transactions for each day of the week in that timespan: index=xxxxxxxxxxxxx | eval day=strftime(_time,"%a") | stats count by day How do I get this to average out so that if I have 1000 records for Mondays and I have 4 Mondays in that timespan then I get the value to be 250? I found this post from 2014, but I can't get that to work:  https://community.splunk.com/t5/Splunk-Search/day-of-the-week-average/m-p/142904
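One way to sketch it, using the index name from the post: count per calendar day first, then average those daily counts by weekday.

```
index=xxxxxxxxxxxxx
| bin _time span=1d
| stats count by _time
| eval day=strftime(_time, "%a")
| stats avg(count) as avg_per_day by day
```

With 1000 records spread over 4 Mondays, the Monday row should come out to 250, since the final stats averages four daily counts rather than summing them.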
Hi community,

Our organisation has a Splunk Enterprise deployment to which I am trying to connect programmatically via the splunk-java-sdk. I have tested the code below on my local machine using the SAM CLI (JetBrains AWS Toolkit). The code works fine after setting:

HttpService.setSslSecurityProtocol(SSLSecurityProtocol.TLSv1_2);

This is why I am certain that the credentials, host, and port I am using are the correct Splunk REST credentials, host, and port. However, when I deploy the same code as an AWS Lambda function, it throws the exception below. The Lambda function has a role with administrator privileges. Please help.

Code:

package helloworld;

import java.io.*;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import com.splunk.*;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;

/**
 * Handler for requests to Lambda function.
 */
public class App implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
        Map<String, String> headers = new HashMap<>();
        headers.put("Content-Type", "application/json");
        headers.put("X-Custom-Header", "application/json");
        APIGatewayProxyResponseEvent response = new APIGatewayProxyResponseEvent().withHeaders(headers);
        try {
            HttpService.setSslSecurityProtocol(SSLSecurityProtocol.TLSv1_2);
            ServiceArgs loginArgs = new ServiceArgs();
            loginArgs.setUsername("username");
            loginArgs.setPassword("password");
            loginArgs.setHost("host");
            loginArgs.setPort(port);
            //loginArgs.setSSLSecurityProtocol(SSLSecurityProtocol.TLSv1_2);
            //Service.setSslSecurityProtocol(SSLSecurityProtocol.TLSv1_2);
            // tried both of these ways too
            Service service = Service.connect(loginArgs);
            service.login();
            String mySearch = "search query";
            JobArgs jobargs = new JobArgs();
            jobargs.setExecutionMode(JobArgs.ExecutionMode.NORMAL);
            jobargs.setEarliestTime("-30m");
            jobargs.setLatestTime("now");
            Job job = service.getJobs().create(mySearch, jobargs);
            // Wait for the job to finish
            while (!job.isDone()) {
                Thread.sleep(500);
            }
            JobResultsArgs resultsArgs = new JobResultsArgs();
            resultsArgs.setOutputMode(JobResultsArgs.OutputMode.CSV);
            InputStream results = job.getResults(resultsArgs);
            final AmazonS3 s3 = AmazonS3ClientBuilder.standard().withRegion(<region>).build();
            s3.putObject("bucket", "Object", results, new ObjectMetadata());
            return response.withStatusCode(200).withBody("");
        } catch (Exception e) {
            e.printStackTrace();
            return response.withStatusCode(500).withBody("");
        }
    }
}

My pom.xml:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>helloworld</groupId>
    <artifactId>HelloWorld</artifactId>
    <version>1.0</version>
    <packaging>jar</packaging>
    <name>A sample Hello World created for SAM CLI.</name>
    <properties>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
    </properties>
    <dependencies>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-lambda-java-core</artifactId>
            <version>1.2.1</version>
        </dependency>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-lambda-java-events</artifactId>
            <version>3.6.0</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.13.1</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>com.splunk</groupId>
            <artifactId>splunk</artifactId>
            <version>1.6.5.0</version>
        </dependency>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk-s3</artifactId>
            <version>1.11.837</version>
        </dependency>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk-sts</artifactId>
            <version>1.11.837</version>
        </dependency>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk-iam</artifactId>
            <version>1.11.837</version>
        </dependency>
    </dependencies>
    <repositories>
        <repository>
            <id>splunk-artifactory</id>
            <name>Splunk Releases</name>
            <url>https://splunk.jfrog.io/splunk/ext-releases-local</url>
        </repository>
    </repositories>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>3.2.4</version>
                <configuration>
                </configuration>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

Exception:

java.lang.RuntimeException: Connection timed out (Connection timed out)
    at com.splunk.HttpService.send(HttpService.java:409)
    at com.splunk.Service.send(Service.java:1293)
    at com.splunk.HttpService.post(HttpService.java:308)
    at com.splunk.Service.login(Service.java:1122)
    at com.splunk.Service.login(Service.java:1101)
    at com.splunk.Service.connect(Service.java:187)
    at helloworld.App.handleRequest(App.java:46)
    at helloworld.App.handleRequest(App.java:27)
    at lambdainternal.EventHandlerLoader$PojoHandlerAsStreamHandler.handleRequest(EventHandlerLoader.java:180)
    at lambdainternal.EventHandlerLoader$2.call(EventHandlerLoader.java:902)
    at lambdainternal.AWSLambda.startRuntime(AWSLambda.java:340)
    at lambdainternal.AWSLambda.<clinit>(AWSLambda.java:63)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at lambdainternal.LambdaRTEntry.main(LambdaRTEntry.java:150)
Caused by: java.net.ConnectException: Connection timed out (Connection timed out)
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:666)
    at sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:173)
    at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
    at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:264)
    at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:367)
    at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191)
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1156)
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050)
    at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177)
    at sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1334)
    at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1309)
    at sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:259)
    at com.splunk.HttpService.send(HttpService.java:403)
Is there a way to allow Splunk login only for authtype!=Splunk? I know that I have to specify authtype=SAML or LDAP in authentication.conf. However, I don't want the option to call https://splunk:port/en-US/account/login?loginType=splunk to exist at all. We plan to set a rule on our AWS load balancer that will block access to splunk:port/en-US/account/login?loginType=splunk. Is there another way to prevent loginType=splunk?
Hello respected members of the prestigious Splunk forum. I have been working with datetimes in Splunk and it is driving me insane. I am extracting the datetimes of two separate events to later subtract them. I have tried many ways to achieve this, but I still don't get the results I want. The datetime format of the events looks like this: 2020-07-28T09:42:33-06:00. I want to calculate the difference in minutes between two events "joined" by the field error-code. Because of the way the system is configured, the error "adult.mov" may appear two or three times, but I am only interested in the first time it appeared. However, if this error has not appeared yet, I want to record the current time instead. I am trying something like this:

| eval terrorXYU=if(match(_raw, "e_type_k"),datetime_c, null)
| eval terroradult.mov=if(match(_raw, "mov"),datetime_c, null)
| eval terroradult.mov= strptime(terroradult.mov,"%m/%d/%Y %H:%M:%S:%3N")
| eval terrorXYU= strptime(terrorXYU,"%m/%d/%Y %H:%M:%S:%3N")
| eval diff= terroradult.mov-terrorXYU

but I get nothing in return. I have tried most of the code in other posts but no luck at all. Thank you for helping me.
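A likely culprit, sketched here: the strptime format string must match the ISO-8601 timestamps shown above, not %m/%d/%Y. Assuming datetime_c holds values like 2020-07-28T09:42:33-06:00 and an extracted field named error_code (the exact field name in your data may differ), one shape for the whole calculation:

```
| eval ts=strptime(datetime_c, "%Y-%m-%dT%H:%M:%S%z")
| eval t_error=if(match(_raw, "mov"), ts, null())
| eval t_xyu=if(match(_raw, "e_type_k"), ts, null())
| stats earliest(t_error) as first_error earliest(t_xyu) as first_xyu by error_code
| eval first_error=coalesce(first_error, now())
| eval diff_minutes=round((first_error - first_xyu) / 60, 2)
```

The earliest() aggregation picks the first occurrence per error_code, and coalesce() substitutes the current time when the error has not appeared yet; whether %z accepts the colon in the -06:00 offset can vary by Splunk version, so verify the parse on a few events first.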
Hello, I am getting the error "Ran out of data while looking for end of header" when parsing CSV files. On the UF I have:

props.conf:
[csv]
SHOULD_LINEMERGE = False
pulldown_type = true
INDEXED_EXTRACTIONS = csv
CHECK_FOR_HEADER = true
KV_MODE = none
category = Structured

inputs.conf:
[monitor:///home/lalit/reports]
disabled = false
sourcetype = csv

I have put the same props.conf on the indexer. Thanks, Lalit
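For comparison, a sketch of a minimal UF-side props.conf for headered CSVs — this is only a guess at the fix, assuming the header sits on the first line of each file (HEADER_FIELD_LINE_NUMBER is a standard props.conf setting for structured inputs):

```
[csv]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
SHOULD_LINEMERGE = false
category = Structured
```

Pinning the header line explicitly avoids the header-detection pass that the error message refers to; if the files have no header at all, FIELD_NAMES would be the setting to look at instead.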
A general question on how people are baselining for alerts. At this time our alerting is over-complicated and cumbersome; our basic alert setup is 150+ lines. I have looked at cutting this down a lot by using some prediction models, which seem pretty good, but I am wondering if there are any good articles or documents others have come across on this.
Hi, I have a dashboard with charts for different categories. I want to group all charts belonging to one category in a box-like structure. Each box should contain the charts for that category only, followed by the next box (category 2). I tried putting all the <chart> elements in a single <panel>, but that automatically pushes each subsequent chart onto the next row. I want multiple charts to appear in the same row.
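In Simple XML, charts inside one <panel> stack vertically, while sibling <panel> elements in a <row> sit side by side; a sketch of one common workaround, grouping a category as one row of titled panels (element names are standard Simple XML, queries are placeholders):

```
<row>
  <panel>
    <title>Category 1</title>
    <chart>
      <search><query>index=foo | timechart count</query></search>
    </chart>
  </panel>
  <panel>
    <chart>
      <search><query>index=foo | top host</query></search>
    </chart>
  </panel>
</row>
```

Each category then becomes its own <row>, which approximates the "box per category" layout without fighting the per-panel stacking behavior.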
Is there a way to extract the "interesting fields" from a search using the API or the Python SDK? See the image below for clarification on which fields I am looking to extract.
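As far as I know, the Fields sidebar is driven by the search job's summary endpoint, and the UI marks a field "interesting" when it appears in at least 20% of events. A sketch of that selection logic under those assumptions — the dict shape below is illustrative, not a real server response; with splunklib you might fetch the real one via something like service.get("search/jobs/<sid>/summary", output_mode="json"):

```python
def interesting_fields(summary, threshold=0.2):
    """Return field names occurring in at least `threshold` of events.

    `summary` mimics the summary endpoint's body: a total event count
    plus a per-field occurrence count.
    """
    total = summary["event_count"]
    if total == 0:
        return []
    return sorted(
        name
        for name, info in summary["fields"].items()
        if info["count"] / total >= threshold
    )

# Illustrative payload (not an actual Splunk response)
sample = {
    "event_count": 100,
    "fields": {
        "host": {"count": 100},        # in every event
        "status": {"count": 45},       # in 45% of events
        "rare_field": {"count": 3},    # below the 20% cutoff
    },
}

print(interesting_fields(sample))  # → ['host', 'status']
```

The 20% cutoff is the UI's convention; adjust `threshold` if you want a stricter or looser notion of "interesting".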
Hello, how do we schedule a CSV file as an attachment to an email? What needs to be added to the source code of a dashboard in Splunk? I want to schedule the CSV file on a weekly/monthly basis. Kindly let me know how we can do that.
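For a single saved search (rather than a whole dashboard), scheduled email delivery with a CSV attachment is normally configured in savedsearches.conf; a sketch under that assumption (stanza name and recipient are placeholders):

```
[Weekly CSV Report]
cron_schedule = 0 8 * * 1
action.email = 1
action.email.to = you@example.com
action.email.sendcsv = 1
action.email.inline = 0
```

The same options appear in the UI when scheduling the report (attach CSV under the email action); a full dashboard export generally goes out as PDF rather than CSV.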
Hi all, I need some advice here. I have some logs that contain the URL and the HTTP response code. Sample:

POST /abc/xyz 200
POST /abc/xyz 401
POST /abc/xyz 500

Is there a way to build a search that compares today's count by response code and alerts if the count exceeds the 30-day average count for that response code? Thanks
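A sketch of one approach over a 30-day window, assuming a field named status has already been extracted from the logs (the index name is a placeholder): compute daily counts per status, average them, and keep today's rows that exceed the average.

```
index=web earliest=-30d@d latest=now
| bin _time span=1d
| stats count by _time status
| eventstats avg(count) as avg_30d by status
| where _time >= relative_time(now(), "@d") AND count > avg_30d
```

Saved as an alert that triggers on any results, this would fire only when today's count for some response code is above its own 30-day daily average.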
Greetings, we have a Splunk environment with three search heads in an SHC. We are running an ldapsearch command using the SA-LDAPsearch 3.0.2 add-on. The search takes a devastating 18-19 seconds to load on the first and third search heads, but on the second one it takes 3-4 seconds. We inspected the job and saw in search.log that the second SH indeed takes milliseconds between actions, while the other two take 2-3 seconds between each internal step. We tried to speed up the ldapsearch with the "attrs" and "basedn" settings, and even though that helped a little, 19 seconds is still too much. The three search heads have identical resources and settings. What could cause this major difference, what can I do to speed up the ldapsearch, and how can I debug it better? Thanks, OmerShira
My requirement is to create two panels in my dashboard. First panel: when I choose "last 15 minutes", I need the values from, for example, 10:00 to 10:15 for today's date. Second panel: I need the data from 10:00 to 10:15 for yesterday's date. It should be a comparison of today's data vs. yesterday's data. Please help me frame the query for the second panel.
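One sketch, assuming the first panel uses a simple relative window (earliest=-15m latest=now, index name is a placeholder): the second panel can hard-code the same window shifted back one day with time modifiers in the search itself.

```
index=your_index earliest=-1d-15m latest=-1d
```

This keeps the two windows exactly 24 hours apart; if the panels should instead follow the dashboard's time picker, the offsets would need to be derived from the picker tokens rather than fixed in the query.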