All Topics

I have an issue where logs contain timestamps in Zulu (UTC) and the server uses local time for its index. I need to calculate the delay now, since the logs are initially written on vendor servers and the lag can be anywhere from zero to over an hour. I can't find any functions that do this, and I'm not having luck either converting a timestamp from one timezone to another or converting timezones to seconds to do any math on.

timestamp the log was written: 2020-07-22T12:59:12.301063Z
timezone the log was indexed from (_time): -0400

I am not an admin and have no control or influence over the servers or their configuration. Is there an easy way to do this inline in the query?
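A minimal SPL sketch of one possible approach (the index name and field extraction are assumptions): strptime can parse the event's UTC timestamp into epoch seconds, which can then be subtracted from _time directly, since epoch values are timezone-independent. strptime's handling of the trailing Z can vary by version; if %Z does not recognize it, strip the Z and parse with an explicit UTC offset instead.

```
index=my_index
| rex field=_raw "(?<log_ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+Z)"
| eval log_epoch=strptime(log_ts, "%Y-%m-%dT%H:%M:%S.%6N%Z")
| eval delay_sec=_time - log_epoch
| table _time log_ts delay_sec
```

Because both _time and log_epoch are epoch seconds, no explicit timezone conversion is needed for the subtraction itself.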
I'm using agent v20.3 and CXF 3.2.12. AppD isn't identifying individual web services on the front end. Instead, it's lumping them all into a single business transaction named after the URL, like "/myapp/services." The Java auto-discovery rule has "web service transaction monitoring enabled" and "discover transactions automatically for all web service requests" both checked. What else can I do to differentiate the individual services? Thanks
So I have an SPL query that does these things:
- gets all the events from index=rds_db where transfer_status equals failure
- passes all the field values (those eval fields) to ServiceNow to create an incident ticket
- |snowincidentalert creates the tickets (all the fields before the command are rendered unusable)
- "Incident Number", "Incident Link", "Correlation ID" are among the fields that appear after the command
- maps the Incident Number field to the number field from index=snow_incident
- uses a regex on the description to get the fields needed for the email lookup and, later, the email integration (source_sys_name, target_sys_name)
- uses source_sys_name to map the email_group field from the email_lookup
- creates a case condition that matches the group to the correct email
- tables the fields needed so this can be used as a parameter for the email integration and send those emails

All of this query is located inside an alert that is triggered in real time. In our requirements, we need to be able to create new tickets. In my SPL query, I hard-coded the correlation ID, so I will not create new tickets and flood the ServiceNow DB with tickets. My problem is that if I don't declare the correlation_id, it doesn't match the incident number that |snowincidentalert has given. All I know is that it works up to the |rename "Incident Number" as number; after that, it doesn't show any results. P.S. The email alert integration also works fine.
It just doesn't give me an email if I remove the correlation_id.
-----------------------------------
index="rds_db"
| eval D1=if(transfer_status="Succesful transfer of file from EKS", "Success", "Failure")
| where D1="Failure"
| rename interface_id as "Service ID", priority as "Priority", source_sys_name as "Source", target_sys_name as "Target", integration_name as IntegrationName
| table "Service ID", "Service Name", "Priority", "Source", "Target", "D1", IntegrationName
| eval state="1"
| eval configuration_item=Source
| eval cmdb_ci=Source
| eval contact_type="Splunk ServiceNow Add-on"
| eval assignment_group=Source
| eval category="Application Software"
| eval subcategory="File_Data_Report"
| eval impact="2"
| eval urgency="2"
| eval priority="2"
| eval short_description="No ".IntegrationName." Received"
| eval custom_fields="u_company=testCompany||comments=Here is my comment||description=".Source.": No ".IntegrationName." Received on [Event Date] by ".Target
| eval account="ServiceNow_account"
| eval correlation_id="bda390dfaf3243328a8994022b45d7a3"
| snowincidentalert
| rename "Incident Number" as number
| join number [search index=snow_incident]
| rex field=dv_description "(?<source_sys_name>.+): No (?<integration_name>.+) Received on \[Event Date\] by (?<target_sys_name>.+)"
| table dv_description number dv_assignment_group source_sys_name target_sys_name integration_name
| lookup email_lookup email_group as source_sys_name OUTPUT email_group
| eval email_group_address_source=case(email_group=="NCTracks", "testNCTracks@gmail.com", email_group=="PHP-AMHC", "testNCTracks@gmail.com", email_group=="testPHP-BCBS@gmail.com", "testPHP-BCBS@gmail.com", email_group=="Analytics", "testAnalytics@gmail.com", email_group=="Enrollment Broker", "testEnrollmentBroker.@gmail.com")
| lookup email_lookup email_group as target_sys_name OUTPUT email_group
| eval email_group_address_target=case(email_group=="NCTracks", "testNCTracks@gmail.com", email_group=="PHP-AMHC", "testNHP-AMHC@gmail.com", email_group=="testPHP-BCBS@gmail.com", "testPHP-BCBS@gmail.com", email_group=="Analytics", "testAnalytics@gmail.com", email_group=="Enrollment Broker", "testEnrollmentBroker.@gmail.com")
| eval incident_link="https://acnncmeddemo.service-now.com/incident.do?sysparm_query=number=".number
| table number incident_link source_sys_name target_sys_name email_group_address_source email_group_address_target
-----------------------------------
I am getting an Error: getaddrinfo ENOTFOUND input-prd-p-d4j7q.splunkcloud.com in Postman when I try to send data to my Splunk instance. I have tried all variations of the URLs in the documentation, but they all give this error.
Hi, As you know, one of the latest vulnerabilities is CVE-2020-0688 on Microsoft Exchange Server. I'm trying free Splunk in my lab environment; I have also installed Sysmon on the Microsoft Exchange server and copied my Sysmon evtx file to Splunk to inspect the logs and detect the vulnerability above. But I am new to Splunk and want the search/regex syntax to do this. Please let me know how I can do it. Regards, Mahdi
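A hedged starting point, not a complete detection for CVE-2020-0688 (the index and sourcetype names are assumptions that depend on how the evtx file was ingested): successful exploitation typically results in the IIS worker process spawning a shell, which Sysmon records as a process-creation event (Event ID 1).

```
index=main sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=1
    ParentImage="*\\w3wp.exe" (Image="*\\cmd.exe" OR Image="*\\powershell.exe")
| table _time host ParentImage Image CommandLine
```

This is a heuristic: any w3wp.exe child shell is suspicious on an Exchange server, but legitimate admin activity can also trigger it, so review the CommandLine values.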
Hey all! We are currently running Splunk Enterprise 7.3.1 (Windows) on our system, and I just recently ran into an issue. When I go to check the information of one of our servers, I get an error message stating 'Could not load lookup=LOOKUP-audit01'. I've done some research already: I went into Settings -> Lookups -> Lookup Definitions and searched for audit01. That search showed me the lookup file being used and the app it is associated with. I went into my files and discovered that audit01.csv does exist in the location it states, so I would think there would be no issue finding and loading it. Does anyone have any other ideas on what I am missing?
While compiling and installing the Splunk Phantom application which I have developed, I am getting an error with error code C901 and the message "is too complex (33)". What mistake have I made, and how do I fix this issue?
Hi, I've got a setup where my universal forwarder clients are going to submit logs to a Splunk indexer instance going through an L4 load balancer. I'd like the communication between the universal forwarders and the balancer to be encrypted. My setup would be something like:

UF > TLS LB > TCP input on the Splunk indexer

How can I enable the outputs on the UF side to be sent over TLS 1.2 without the client certificate validation phase? I did use a setting like useSSL = true on my forwarder. According to this snippet of the outputs.conf configuration page (https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/Outputsconf), it should enable just the encrypted outgoing stream without requiring a client certificate (as in "legacy" mode):

----Secure Sockets Layer (SSL) Settings----
To set up SSL on the forwarder, set the following setting/value pairs. If you want to use SSL for authentication, add a stanza for each receiver that must be certified.

useSSL = <true|false|legacy>
* Whether or not the forwarder uses SSL to connect to the receiver, or relies on the 'clientCert' setting to be active for SSL connections.
* You do not need to set 'clientCert' if 'requireClientCert' is set to "false" on the receiver.
* If set to "true", then the forwarder uses SSL to connect to the receiver.
* If set to "false", then the forwarder does not use SSL to connect to the receiver.
* If set to "legacy", then the forwarder uses the 'clientCert' property to determine whether or not to use SSL to connect.
* Default: legacy

As the universal forwarder client I'm using the latest Docker image provided by Splunk, and I push an outputs.conf to it using the deployment service.
The outputs.conf looks like:

[tcpout]
defaultGroup=tcpin

[tcpout:tcpin]
useSSL = true
sslVersions = tls1.2
useClientSSLCompression = true
server=my_lb_dns_name:9997

From the container I'm able to reach the LB with the following command:

sudo -u splunk LD_LIBRARY_PATH=./lib ./bin/openssl s_client -connect my_lb_dns_name:9997

But in splunkd.log I see warnings like:

WARN TcpOutputProc - Cooked connection to ip=10.235.106.194:9997 timed out

Can someone help me figure out what I'm missing? Thanks, Giuseppe
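One possibility worth testing (a sketch under assumptions, not a confirmed fix; the group name and port come from the post, everything else is illustrative): since the load balancer terminates TLS and forwards plain TCP, the indexer side needs a plain [splunktcp] input rather than [splunktcp-ssl], and the forwarder should not attempt to verify a server certificate chain it cannot validate.

```
# outputs.conf on the universal forwarder (sketch)
[tcpout:tcpin]
server = my_lb_dns_name:9997
useSSL = true
sslVersions = tls1.2
sslVerifyServerCert = false

# inputs.conf on the indexer behind the LB (sketch):
# plain cooked TCP, because TLS is already terminated at the LB
[splunktcp://9997]
disabled = 0
```

If the indexer input were [splunktcp-ssl] instead, the LB would be forwarding decrypted traffic to a port expecting a second TLS handshake, which can look exactly like a connection timeout on the forwarder side.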
I have set up a deployment server which manages multiple forwarders. All my instances run as the user splunk. When I push apps from the deployment server to the forwarders, the owner of the app shows as "root", when it should be splunk; otherwise the app doesn't work. It is very inconvenient to log in to each forwarder, change the owner to splunk, and restart the service again after each app is pushed. Any solution for this?
Hi All, We have a utility on EC2 Linux servers (Ireland region) which fetches reports created in Splunk. We have opened all the ports and allowed connections in EC2, but we are still not able to get results when running the commands below:

telnet (org).splunkcloud.com
openssl s_client -connect (org).splunkcloud.com:8089

We have the same setup in North Virginia, where it works as expected. Please let us know if we are missing something.
Hi there, our current Splunk installation contains an indexer cluster with 2 nodes and 1 search head which also holds the cluster master, license, and deployment server roles. I have now added two new indexer peers to the existing cluster, so at the moment we have 4 indexer nodes in production. The main goal is to leave only the 2 new indexers up and running. I have read in another thread that already indexed data cannot be replicated to the new servers, so I need to wait until the retention period is reached. So far no problem, but I have a few questions about the current setup:

- Do I have to change the replication factor / search factor? The current setting is RF=2 / SF=2.
- We created a couple of server classes on our deployment server; each server class has its own outputs.conf file, where I defined the tcpout through only the old indexers. Should I change that to the new indexers, so all new data goes directly to them?
- Are there any other configuration files to modify to make sure new data only goes to the new indexers?
- On the cluster overview page, I see that all indexers are searchable (4). Will the Splunk search head automatically detect where to search for events?

I couldn't find much information about this topic/situation in the Splunk documentation, so any information would be helpful. Thanks.
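For the outputs.conf question, a hedged sketch of what the forwarder-side change might look like (the hostnames are placeholders, not real servers; indexer discovery via the cluster master is an alternative worth evaluating): pointing the output group only at the two new peers makes all newly forwarded data land on them, while the cluster continues to serve old buckets from the old peers until retention expires.

```
# outputs.conf pushed via the deployment server (sketch)
[tcpout]
defaultGroup = new_indexers

[tcpout:new_indexers]
server = new-idx1.example.com:9997, new-idx2.example.com:9997
autoLBFrequency = 30
```

The search head finds searchable copies through the cluster master, so it does not need to be told which peers hold which buckets.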
Hi Team, I want to filter out logs at the indexing level itself, i.e. if an event arrives in the format shown below, containing "GET / - 111", then it should not be ingested into Splunk. Could you kindly help with the props and transforms for this?

Sample event:
2020-07-22 12:53:53 xx.xxx.xx.xx GET / - 111 - xx.xxx.x.xxx - - xxx x x xx
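A sketch of the usual nullQueue approach, applied on the indexer or a heavy forwarder (the sourcetype name is a placeholder, and the regex should be tightened against the real data): events matching the transform are routed to the nullQueue and discarded before indexing.

```
# props.conf
[my_iis_sourcetype]
TRANSFORMS-drop_get_root = drop_get_root

# transforms.conf
[drop_get_root]
REGEX = GET\s/\s-\s111
DEST_KEY = queue
FORMAT = nullQueue
```

Note this only works where parsing happens; a universal forwarder sending cooked data cannot apply these transforms itself.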
For some months we have been having problems sending email alerts. The message is as follows:   2020-07-22 12:00:16,226 +0200 INFO sendemail:1146 - sendemail pdfgen_available = 1 2020-07-22 12:00:16,227 +0200 INFO sendemail:1286 - sendemail:mail effectiveTime=1595412000 2020-07-22 12:00:19,150 +0200 INFO sendemail:1306 - Generated PDF for email 2020-07-22 12:00:20,298 +0200 ERROR sendemail:137 - Sending email. subject="Splunk Report: License Usage PRODUCTION", results_link="https://xxxxx/app/search/@go?sid=scheduler__admin__search__RMD569643b83e5ae4406_at_1595412000_82981", recipients="[u'xxxx@xxxcom', u'xxxx@xxx.com']", server="smtp.office365.com:587" 2020-07-22 12:00:20,298 +0200 ERROR sendemail:458 - (554, '5.2.0 STOREDRV.Submission.Exception:SendAsDeniedException.MapiExceptionSendAsDenied; Failed to process message due to a permanent exception with message Cannot submit message. 0.35250:02016A81, 1.36674:0A000000, 1.61250:00000000, 1.45378:02000000, 1.44866:96510000, 1.36674:0E000000, 1.61250:00000000, 1.45378:9B510000, 1.44866:B4030000, 16.55847:67100000, 17.43559:0000000024020000000000000000000000000000, 20.52176:140F7B8C1B00103100000000, 20.50032:140F7B8C8B17000000000000, 0.35180:140F7B8C, 255.23226:0A007081, 255.27962:0A000000, 255.27962:0E000000, 255.31418:0A007181, 0.35250:00000000, 1.36674:0A000000, 1.61250:00000000, 1.45378:02000000, 1.44866:5E000000, 1.36674:32000000, 1.61250:00000000, 1.45378:63000000, 1.44866:01000000, 16.55847:CA000000, 17.43559:0000000070030000000000000100000000000000, 20.52176:140F7B8C1B00101053000000, 20.50032:140F7B8C8B1700006E2C0000, 0.35180:58000000, 255.23226:4800D13D, 255.27962:0A000000, 255.27962:32000000, 255.17082:DC040000, 0.27745:782C0000, 4.21921:DC040000, 255.27962:FA000000, 255.1494:7D2C0000, 0.38698:05000780, 0.37692:01000000, 0.37948:00400600, 5.33852:00000000534D545000040480, 7.36354:0100000000000109302E3331, 4.56248:DC040000, 7.40748:010000000000010B3A396133, 7.57132:00000000000000003932612D, 1.63016:32000000, 
4.39640:DC040000, 8.45434:EC100AF9E504794BA48473AAD7EDE1BB3932612D, 5.10786:0000000031352E32302E333139352E3032373A414D36505230354D42363238303A39613361313639312D333266342D343932612D623232622D62303463366135373834316200201000000000, 7.51330:1B0F0A0B262ED80818000000, 0.39570:00000000, 1.55954:0A000000, 0.49266:02000000, 1.33010:0A000000, 2.54258:00000000, 0.40002:07000000, 1.56562:00000000, 1.64146:32000000, 1.33010:32000000, 2.54258:DC040000, 255.1750:AF000000, 255.31418:0A005D36, 0.22753:4F2E0000, 255.21817:DC040000, 0.64418:0A00F565, 4.39842:DC040000, 0.41586:B9000000, 4.60547:DC040000, 0.21966:852E0000, 4.30158:DC040000 [Hostname=AM6PR05MB6280.eurprd05.prod.outlook.com]')

When I use the sendemail command with the 'from' option it works correctly, but sending from alerts does not work. Can someone help me? Thanks a lot!!
Hi, I installed the Barracuda Web Application Firewall app, but while testing I encountered an error message: "It appears that this view uses Advanced XML, which was removed from Splunk Enterprise." The version of Splunk I am using is 8.0.1.
Not able to see my lookup while creating an automatic lookup. While creating an automatic lookup, I am not able to see my lookup in the lookup dropdown. The permission on this lookup is set to global, but still no luck. Please guide me.
Hi Team, I need to edit an existing dashboard to display the time taken at the 90th, 97th, and 99th percentiles of transactions. The Splunk logs look like:

tranId=testi1234556 cid=TEST-VALIDATATION-20200946101122 appId=34567usxpnsow c.a.g.t.c.http.HttpConnection.receive log="External connection ran 283 ms"

Please help with this as soon as possible.
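A sketch of one way to compute those percentiles (the index and sourcetype are placeholders; the rex assumes the duration always appears as "ran <n> ms" in the event):

```
index=my_index sourcetype=my_sourcetype "External connection ran"
| rex "ran\s(?<duration_ms>\d+)\sms"
| stats perc90(duration_ms) AS p90_ms, perc97(duration_ms) AS p97_ms, perc99(duration_ms) AS p99_ms
```

Dropped into a dashboard panel, the stats row renders directly as a single-row table; add "by <field>" to the stats clause to break the percentiles out per transaction type.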
Hello, I think this might be simple, but I need some guidance. Any help would be really appreciated. I have a log in which I have to check for successful transmission for all countries. When a transmission fails, I have to show the reason or error for that country. Below is the sample data:

Status--20/07/2020 12:18:15--CALC_RFS_TUE_PM--(KE)--0 : - Initializing Communications...
Status--20/07/2020 12:18:15--CALC_RFS_TUE_PM--(KE)--0 : - Sending Sender Information...
Status--20/07/2020 12:18:15--CALC_RFS_TUE_PM--(KE)--0 : - Sending Recipient Information...
Status--20/07/2020 12:18:15--CALC_RFS_TUE_PM--(KE)--0 : - Sending Message...
Status--20/07/2020 12:18:15--CALC_RFS_TUE_PM--(KE)--0 : - Transmission Complete
Success--20/07/2020 12:19:10--CALC_RFS_TUE_PM--(MY)---2207217873 :ORA-00001: unique constraint (WIMS.PK_TB_TRN_FCST_DAILY) violated ORA-06512: at "WIMS.SP_BUILD_FCST", line 573 - ForeCast data committed successfully.
Failed--20/07/2020 12:19:10--CALC_RFS_TUE_PM--(MY)---2207217873 :ORA-00001: unique constraint (WIMS.PK_TB_TRN_FCST_DAILY) violated ORA-06512: at "WIMS.SP_BUILD_FCST", line 573 - RFS calculation failed
Trace--20/07/2020 12:19:10--CALC_RFS_TUE_PM--(MY)---2207217873 :ORA-00001: unique constraint (WIMS.PK_TB_TRN_FCST_DAILY) violated ORA-06512: at "WIMS.SP_BUILD_FCST", line 573 - Connecting to SMTP server for attempt:1
Status--20/07/2020 12:19:10--CALC_RFS_TUE_PM--(MY)---2207217873 :ORA-00001: unique constraint (WIMS.PK_TB_TRN_FCST_DAILY) violated ORA-06512: at "WIMS.SP_BUILD_FCST", line 573 - Connecting to SMTP Server (notesGWEUR.MICHELIN.com )...
Status--20/07/2020 12:19:10--CALC_RFS_TUE_PM--(MY)---2207217873 :ORA-00001: unique constraint (WIMS.PK_TB_TRN_FCST_DAILY) violated ORA-06512: at "WIMS.SP_BUILD_FCST", line 573 - Initializing Communications...

Here "KE" and "MY" are the countries. I have tried the query below, but it gives errors.
| makeresults
| eval Possible_ORs="AU,MY,KE,JP,SI,VN,ID,TD,KO,J1"
| eval Possible_ORs=split(Possible_ORs, ",")
| mvexpand Possible_ORs
| eval count=0
| rename Possible_ORs as "ORs"
| fields - _time
| append
    [| search sourcetype=RFS_Log
    | rex "Status\W+(?P<Date>\d{1,2}\/\d{1,2}\/\d+\s+\d+\W+\d+\W+\d+).*(?P<OR_NAME>[A-Z]{2}).*Transmission\s+Complete"
    | eval Date=strftime(Date, "%Y-%m-%d %H:%M:%S")
    | eval ORs=OR_NAME
    | eval ORs = split(ORs,",")
    | mvexpand ORs
    | eval count=1
    | fields - _raw _time]
| dedup ORs sortby - count
| eval Job Name=case(count>=0, "RFS Calculation")
| eval Status=case(count>0, "Calculation Successful", count=0, "Calculation Failed")
| eval Status=if(isnull(Status), "Calculation Failed", Status)
| eval Reason=if(Status="Calculation Failed", [search sourcetype=RFS_Log | rex "Status\W+\d{1,2}\/\d{1,2}\/\d+\s+\d+\W+\d+\W+\d+.*(?P<OR_NAME>[A-Z]{2}).*violated"], "failed")
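One of the errors in the attempt above is structural: a subsearch cannot be used inside an eval expression. A hedged alternative sketch (the rex patterns are assumptions based on the sample lines): classify every event by its leading token and country in one pass, then aggregate per country, carrying the failure text along.

```
sourcetype=RFS_Log
| rex "^(?<result>[A-Za-z]+)--[^(]*\((?<country>[A-Z]{2})\)"
| eval success=if(like(_raw, "%Transmission Complete%"), 1, 0)
| eval reason=if(result="Failed", _raw, null())
| stats max(success) AS success, values(reason) AS Reason BY country
| eval Status=if(success=1, "Calculation Successful", "Calculation Failed")
| table country Status Reason
```

Countries with no events at all can then be filled in by appending a makeresults-generated country list, as in the original attempt, before the final stats.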
Hi, We have a PKI infrastructure using root and intermediate certificate servers.

I have set up SSL in server.conf and web.conf using the same PEM cert. The private key doesn't have password protection.

web.conf:

[settings]
privKeyPath = /opt/splunk/etc/auth/mycerts/server.key
serverCert = /opt/splunk/etc/auth/mycerts/server.pem
enableSplunkWebSSL = true
httpport = 443

server.conf:

[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/mycerts/root.pem
serverCert = /opt/splunk/etc/auth/mycerts/server.pem
sslPassword =

I am also using LDAP integration over SSL. When I enable sslConfig in server.conf, Splunk Web becomes slow and I start getting 500 internal errors. When I disable sslConfig, Splunk Web works fine and my certificates are recognized by the web browser.

Can you advise on what could be the cause of this behavior? Checking the logs, I see the errors below.

From splunkd.log:

07-22-2020 09:33:51.954 +0200 ERROR ExecProcessor - message from "/opt/splunk/bin/python2.7 /opt/splunk/etc/apps/splunk_monitoring_console/bin/dmc_config.py" Socket error communicating with splunkd (error=('_ssl.c:726: The handshake operation timed out',)), path = /services/shcluster/config?output_mode=json

From web-service.log:

2020-07-22 09:35:57,816 ERROR [5f17ec3fc77f08942c2710] __init__:522 - Socket error communicating with splunkd (error=_ssl.c:1074: The handshake operation timed out), path = /services/server/info
2020-07-22 09:35:57,817 INFO [5f17ec3fc77f08942c2710] startup:139 - Splunk appserver version=UNKNOWN_VERSION build=000 isFree=False isTrial=True
2020-07-22 09:35:57,818 INFO [5f17ec3fc77f08942c2710] decorators:272 - require_login - no splunkd sessionKey variable set; request_path=/en-US/
2020-07-22 09:35:57,818 INFO [5f17ec3fc77f08942c2710] decorators:280 - require_login - redirecting to login
2020-07-22 09:36:27,994 ERROR [5f17ec5df57f08942c8510] __init__:522 - Socket error communicating with splunkd (error=_ssl.c:1074: The handshake operation timed out), path = /services/server/info
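A hedged guess at one common cause of handshake timeouts with intermediate CAs (the paths come from the post; the chain layout is an assumption to verify, not a confirmed diagnosis): internal splunkd clients may be unable to build the trust chain if the PEM files do not carry the intermediate certificate.

```
# server.pem (serverCert) concatenated, in this order:
#   1. server certificate
#   2. intermediate CA certificate
#
# root.pem (sslRootCAPath) concatenated, in this order:
#   1. intermediate CA certificate
#   2. root CA certificate

[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/mycerts/root.pem
serverCert = /opt/splunk/etc/auth/mycerts/server.pem
```

Verifying with `openssl verify -CAfile root.pem server.pem` on the box is a cheap way to confirm whether the chain is the problem before touching Splunk.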
Hello, Let me give you an example. I've got the following table to work with:

src_group  dest_group  count
A          B           10
B          A           21
A          C           32
B          Z           6

I'd like to have something like this as a result:

group  src_count  dest_count
A      42         21
B      27         10
C      0          32
Z      0          6

As you can see, I now have only one column with the groups, and the counts are merged by group while the direction (src or dest) is now in the counts: we sum the count for each group depending on whether the group was the source or the destination in the first table.

Any clue?

Edit: to help, you can use this search, which generates the first table:

| stats count
| eval src_group="A", dest_group="B", count=10
| append [| stats count | eval src_group="B", dest_group="A", count=21 ]
| append [| stats count | eval src_group="A", dest_group="C", count=32 ]
| append [| stats count | eval src_group="B", dest_group="Z", count=6 ]
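A sketch of one approach (shown on the generator search from the post): tag each row with both of its groups plus a direction label, expand into one row per (group, direction), then chart by direction.

```
| stats count
| eval src_group="A", dest_group="B", count=10
| append [| stats count | eval src_group="B", dest_group="A", count=21 ]
| append [| stats count | eval src_group="A", dest_group="C", count=32 ]
| append [| stats count | eval src_group="B", dest_group="Z", count=6 ]
| eval pair=mvappend("src," . src_group, "dest," . dest_group)
| mvexpand pair
| eval direction=mvindex(split(pair, ","), 0), group=mvindex(split(pair, ","), 1)
| chart sum(count) OVER group BY direction
| rename src AS src_count, dest AS dest_count
| fillnull value=0 src_count dest_count
```

Each original row contributes its count once as a source and once as a destination, so the chart sums line up with the expected table (A: src 42, dest 21; C and Z get 0 src after fillnull).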
Hello Everyone! I have a scenario where I need to extract a particular set of IDs from index1 in search1 and run search2 on index2 based on the extracted IDs. Example:

Search1: index="index1" sourcetype="st1" field1="abc" | rename id as ticket_id
Search2: index="index2" source="xyz" | sort 0 ticket_id | .........

What's the best way to go about this? I tried using map, but I've had no luck at all. Not sure if it's because I'm using it wrong or if it's not appropriate for the situation. Including both indexes at the start of the search is not feasible given the absurd size of the second index. Can anyone please help me here? Thank you in advance.
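A sketch of the usual subsearch pattern for this (workable as long as the ID list stays within subsearch limits, roughly 10,000 results by default): the inner search returns only ticket_id values, which Splunk turns into an implicit filter on the outer search, so the large index is only scanned for matching IDs.

```
index="index2" source="xyz"
    [ search index="index1" sourcetype="st1" field1="abc"
      | rename id AS ticket_id
      | fields ticket_id
      | dedup ticket_id ]
| sort 0 ticket_id
```

The `| fields ticket_id` line matters: it makes the subsearch emit a clean `ticket_id=... OR ticket_id=...` clause instead of matching on every returned field.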