Hi, I am using tstats to search the Network Traffic data model for outbound SMB traffic (port 445) to external IP address ranges. Why are local IP ranges still appearing in my search results? Here is my syntax:

| tstats summariesonly=t fillnull_value="MISSING" count from datamodel=Network_Traffic.All_Traffic where All_Traffic.dest_port="445" AND NOT All_Traffic.dest IN ("10.0.0.0/8","172.16.0.0/16","192.168.0.0/24") earliest=-15m latest=now by _time, All_Traffic.dest, All_Traffic.dest_port, All_Traffic.src, All_Traffic.src_port, All_Traffic.action, All_Traffic.bytes, index, sourcetype

I believe I have filtered them correctly, but local addresses still appear in the results.
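For context on why private addresses can slip past CIDR filters: the RFC 1918 private ranges are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16, so masks narrower than those (such as the /16 and /24 in the query above) leave part of the private space unmatched. A quick Python sketch; the sample addresses are hypothetical:

```python
import ipaddress

# The CIDR blocks from the query above vs. the full RFC 1918 ranges
query_ranges = [ipaddress.ip_network(c) for c in
                ("10.0.0.0/8", "172.16.0.0/16", "192.168.0.0/24")]
rfc1918 = [ipaddress.ip_network(c) for c in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_filtered(ip, ranges):
    """Return True if ip falls inside any of the given CIDR blocks."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in ranges)

# 192.168.5.1 is a private address, but escapes the /24 in the query
print(is_filtered("192.168.5.1", query_ranges))  # False
print(is_filtered("192.168.5.1", rfc1918))       # True
```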
Hi everyone. I have followed the documentation for setting up TLS for inter-Splunk communication with self-signed certificates. I have a small test environment with an SH, an indexer, and a UF. However, I get the following error:

03-15-2023 01:23:39.475 +0000 ERROR TcpInputProc [2605538 FwdDataReceiverThread] - Error encountered for connection from src=10.0.0.4:45088. error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol

I have created the following certificates and keys based on the Splunk documentation:

myCertAuthCertificate.csr
myCertAuthCertificate.pem
myCertAuthCertificate.srl
myCertAuthPrivateKey.key
myServerCertificate.csr
myServerCertificate.pem
myServerPrivateKey.key
mySplkCliCert.pem <- this is the concatenated file

I copy the myCertAuthCertificate.pem and mySplkCliCert.pem files from the SH to the indexer. On the SH and the indexer, I edit server.conf to contain the following:

[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/mycerts/myCertAuthCertificate.pem
serverCert = /opt/splunk/etc/auth/mycerts/mySplkCliCert.pem
sslPassword = *****

What am I doing wrong?
We have a notification system composed of four services: a web API; a fanout service that converts submitted multi-recipient, multi-delivery-method notifications into multiple notifications with just one recipient and one delivery method each; and then delivery and retry services. Each service logs to Splunk as it processes a notification, so the states are "submitted", "fanned out", "delivered", and "pending retry". The log events carry an ID associated with the notification and the state that just completed.

I am hoping to identify notifications that are missing states: for example, "submitted" appears as a logged event but nothing else does, or "submitted" and "fanned out" appear but nothing else. Notifications expire, so bonus points if anyone can come up with a way to track notifications that logged "submitted", "fanned out", and "pending retry" but stopped getting "pending retry" log events before they expired. "delivered" is of course the final state. Another way to think about this is looking for any "submitted" notification ID that does not have at least "fanned out" and "delivered".

I'm willing to set aside the complexity of the one-to-many relationship for now, unless someone has ideas about that. In other words, if a submitted notification has 3 recipients and 2 delivery methods, it should become 6 notifications. I'd love to be able to track that properly too, and I could log additional data to facilitate it if needed.
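In SPL this kind of gap detection is typically a stats values(state) by id followed by a filter on the collected states. As a sketch of the underlying logic in Python (the event data below is hypothetical):

```python
from collections import defaultdict

# Hypothetical event stream: (notification_id, state) pairs as they
# would be extracted from the log events described above.
events = [
    ("n1", "submitted"), ("n1", "fanned out"), ("n1", "delivered"),
    ("n2", "submitted"),
    ("n3", "submitted"), ("n3", "fanned out"),
]

REQUIRED = {"fanned out", "delivered"}

def incomplete_notifications(events):
    """Map each 'submitted' ID to the required states it never logged."""
    seen = defaultdict(set)
    for nid, state in events:
        seen[nid].add(state)
    return {nid: sorted(REQUIRED - states)
            for nid, states in seen.items()
            if "submitted" in states and not REQUIRED <= states}

# n2 is missing both required states; n3 is missing "delivered"
print(incomplete_notifications(events))
```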
Hello All - I need to be able to compare/graph regression test results from two different models. The search command to create a table for one of the searches is:

index="frontEnd" source="regress_rpt" pipeline="my_pipe" version="23ww10b" dut="*" (testlist="*") (testName="*") status="*" | table cyclesPerCpuSec wall_hz testPath rpt

This returns a table with 6 rows (as there are 6 tests per version). Is there a way to compare the cyclesPerCpuSec of this search to a new search which has a different version? I.e.:

index="frontEnd" source="regress_rpt" pipeline="my_pipe" version="23ww10a" dut="*" (testlist="*") (testName="*") status="*" | table cyclesPerCpuSec wall_hz testPath rpt

Thanks, Pip
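In Splunk this comparison is usually done by searching both versions in one query and splitting the results with stats or chart over testPath by version. The pairing logic itself, sketched in Python with hypothetical numbers:

```python
# Hypothetical cyclesPerCpuSec results keyed by testPath, one dict per
# version, standing in for the two searches above.
v10a = {"test1": 120.0, "test2": 95.0, "test3": 210.0}
v10b = {"test1": 132.0, "test2": 90.0, "test3": 210.0}

def compare(old, new):
    """Pair cyclesPerCpuSec by test and compute the percent change."""
    rows = []
    for test in sorted(old.keys() & new.keys()):
        delta = (new[test] - old[test]) / old[test] * 100
        rows.append((test, old[test], new[test], round(delta, 1)))
    return rows

for row in compare(v10a, v10b):
    print(row)
```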
Is there a method by which a playbook can be configured to add the tag to the artifact and not to the whole container? We are running Splunk SOAR 5.0.1 on-prem. The playbook logic works; the only issue is that the entire container gets tagged.
Hi Splunkers, I'm working on a report panel in a dashboard where I need to show the difference between two fields in colors. Can anyone help me do this the right way using XML? For example:

Nov   Feb
10    5
20    10
30    40

If the Feb value is less than the Nov value it should show as green; if not, it should be red. How can I do this? Can we do this using XML instead of JavaScript?
After upgrading to 9.x, we are seeing higher CPU utilization.
In splunkd.log I can see the following:

"Local KV Store has replication issues. See introspection data and mongod.log for details. Cluster has not been configured on this member. KVStore cluster has not been configured."

If I check kvstore-status, it says the KV store status is down for this new member. The normal shcluster-status, however, shows the new member as UP and its KV store as ready. I'm not sure what I can do to try to force the KV store to initialize. I have tried shutting Splunk down on the new member, running a kvstore-clean, and restarting, but it still isn't taking. Splunk version 9.0.0.

Any thoughts on what else I can try?
I created a summary index with a custom _raw from a tstats search from 03/14/2023 16:30:00 to 03/14/2023 16:35:00:

| tstats summariesonly=false count sum(common.sentbyte) AS sentbyte sum(common.rcvdbyte) AS rcvdbyte FROM datamodel=CTTI_Fortinet_Log WHERE common.subtype=forward BY common.devname common.dstip common.sessionid
| rename common.devname as devname common.dstip as dstip common.sessionid as sessionid
| addinfo
| eval _time = strftime(info_min_time,"%m/%d/%Y %H:%M:%S %z")
| eval version=0.44
| eval _raw=_time . ", " . "devname=" . devname . ", " . "dstip=" . dstip . ", " . "sessionid=" . sessionid . ", " . "sentbyte=" . coalesce(sentbyte,0) . ", " . "rcvdbyte=" . coalesce(rcvdbyte,0) . ", " . "version=" . version
| fields _raw
| collect index=superbasket_d_test addtime=f

It worked as intended, showing me the correct extracted _time and _raw in the collect query results, but when I then search that same index, for some reason _time is 1 second later. Why does this happen?
Hi, I have onboarded data via DB Connect through a rising column, for which we have configured the rising column value as RS_LAST_MAINTENANCE_TIMESTAMP, which is the default time field. But in the dashboard we are filtering the month-wise app counts based on APPLICATION_CRT_DT, which has no timestamp. The issue is that if we search data for the last 7 days, January data is also populated, because that particular app was created in January and its values were updated within the last 7 days. So I have written a "where" condition like the one below, which does not work in all cases (it works only when searching since a specific date, where epoch times are applied in the where condition and I get accurate results; but when searching for the last 7 days, 24 hours, or all time, the parameter is passed as -7d@d and I get an "invalid" error). Kindly help with this.

<input type="time" token="datefield">
  <default>
    <earliest>0</earliest>
    <latest>now</latest>
  </default>
</input>
<row>
  <table>
    <search>
      <query>index=* source=tablename
| eval Total_Apps=if(match('Type',"NTB"),"1","0")
| eval Date=strptime(APPLICATION_CRT_DT,"%Y-%m-%d %H:%M:%S")
| where Date&gt;=$datefield.earliest$ OR Date&lt;=$datefield.latest$
| eval Mon-Year=strftime(strptime(APPLICATION_CRT_DT,"%Y-%m-%d %H:%M:%S"),"%b-%Y")
| stats sum(Total_Apps) as "Total Apps" by Mon-Year</query>
    </search>
  </table>
</row>
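One likely source of the "invalid" error is that $datefield.earliest$ is not always an epoch number: depending on the time picker it can be 0, "now", or a relative specifier such as -7d@d, which cannot be compared numerically in a where clause. A simplified Python sketch of how such a token resolves to an epoch; the parser below handles only the forms mentioned in this post, while real Splunk accepts many more units:

```python
import re
import time

def resolve_time_token(token, now=None):
    """Resolve a (simplified) Splunk time token to epoch seconds.

    Handles an epoch number, "now", and a day-offset spec like
    "-7d@d" (offset in days, optionally snapped to start of day).
    Snapping here uses UTC day boundaries for simplicity.
    """
    now = time.time() if now is None else now
    if token == "now":
        return now
    try:
        return float(token)          # already an epoch value
    except ValueError:
        pass
    m = re.fullmatch(r"(-?\d+)d(@d)?", token)
    if not m:
        raise ValueError(f"unsupported time spec: {token}")
    t = now + int(m.group(1)) * 86400
    if m.group(2):                   # "@d": snap to start of day
        t -= t % 86400
    return t

# "-7d@d" resolves to a number that CAN be compared against Date
print(resolve_time_token("-7d@d", now=1_700_000_000))
```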
Hi, here is my data in 2 logs, each having 3 fields.

Log 1:
AccountName  books bought  bookName
ABC          4             book1, book2, book3, book1
DEF          3             book1, book2, book2
MNO          1             book3

Log 2:
AccountName  books sold  bookName
ABC          1           book3
DEF          2           book2, book2
MNO          1           book3

Result I want:
AccountName  Total Books  bookName  bought  sold
ABC          4            book1     2       0
                          book2     1       0
                          book3     1       1
DEF          3            book1     1       0
                          book2     2       2
MNO          1            book3     1       1

Can anyone please help me with this?
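In SPL this usually involves splitting the multivalue bookName field and combining the two logs (for example with append followed by stats). The counting logic itself, sketched in Python using the sample data above:

```python
from collections import Counter

# The two logs above, with the comma-separated bookName field split out
bought_log = {
    "ABC": ["book1", "book2", "book3", "book1"],
    "DEF": ["book1", "book2", "book2"],
    "MNO": ["book3"],
}
sold_log = {
    "ABC": ["book3"],
    "DEF": ["book2", "book2"],
    "MNO": ["book3"],
}

def merge_counts(bought, sold):
    """Per account, count how often each book was bought and sold."""
    result = {}
    for account in bought.keys() | sold.keys():
        b = Counter(bought.get(account, []))
        s = Counter(sold.get(account, []))
        result[account] = {book: (b[book], s[book])
                           for book in sorted(b.keys() | s.keys())}
    return result

# For ABC: book1 bought twice and never sold, book3 bought and sold once
print(merge_counts(bought_log, sold_log)["ABC"])
```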
I have a lookup file of host names:

HostName
Host1
Host2
Host3
Host4
Host5

I would like to create a search that includes only events from the hostnames listed in my lookup file. How do I do this? The "host" field in my events matches the "HostName" field in my lookup file. For example, I want to see which of these hosts are sending Windows security logs and which are not. I know all these systems should be, but some are not, and I want to use the lookup file to find out which ones are and which ones are not.
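In Splunk the usual pattern is an inputlookup subsearch that renames HostName to host, which restricts the search to the listed hosts; finding the silent hosts is then just set arithmetic between the lookup and the hosts actually seen. A Python sketch with hypothetical hosts:

```python
# Hypothetical data: hosts from the lookup file vs. hosts actually
# observed sending Windows security logs in the search window.
lookup_hosts = {"Host1", "Host2", "Host3", "Host4", "Host5"}
reporting_hosts = {"Host1", "Host3", "Host5", "OtherHost"}

# Lookup hosts that ARE sending logs (what restricting the search
# to the lookup achieves)
matched = lookup_hosts & reporting_hosts

# Lookup hosts that are NOT sending logs
silent = lookup_hosts - reporting_hosts

print(sorted(matched))  # ['Host1', 'Host3', 'Host5']
print(sorted(silent))   # ['Host2', 'Host4']
```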
Hi everyone, I have the following sample search that yields the table below.

index=server
| stats avg(response_time) by server_name
| sort + avg(response_time)
| streamstats count as rank
| head 3

rank  server_name         avg(response_time)  new_performance_metric
1     best.server         300
2     second.best.server  350
3     third.best.server   400

Once I know the top servers, I want to calculate an additional new_performance_metric for each of the three servers. Does anyone know how this can be done?

Notes:
- I can't use foreach, since the metric I want to calculate involves streaming commands, and foreach does not support that.
- I don't think I can use a subsearch, since it is executed first, before the top servers are known.
- I can't precompute new_performance_metric for all servers and then use something like a lookup, since this is computationally too expensive.

My guess is that the solution involves a macro, but I couldn't figure it out yet. Many thanks in advance.
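One common shape for this in Splunk is a two-pass approach (for example, the map command runs a follow-up search per row of the top-N result). The two-pass logic, sketched in Python with made-up samples and a stand-in metric:

```python
from statistics import mean

# Hypothetical response-time samples per server
samples = {
    "best.server":        [290, 300, 310],
    "second.best.server": [340, 350, 360],
    "third.best.server":  [390, 400, 410],
    "slow.server":        [900, 950, 1000],
}

# Pass 1: rank servers by average response time, keep the top 3
ranked = sorted(samples, key=lambda s: mean(samples[s]))[:3]

# Pass 2: compute the extra metric only for the winners
# (max-min spread here, standing in for new_performance_metric)
metrics = {s: max(samples[s]) - min(samples[s]) for s in ranked}

print(ranked)
print(metrics)
```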
Join us for an AppDynamics Cloud on-demand webinar

The experience economy has led to a boom in multi-cloud deployments. But it's also introduced new data challenges that hinder full-stack observability. How can organizations overcome multi-cloud data silos? Experts from AppDynamics and CloudFabrix will explore why data-centric AIOps is key, and share strategies for optimizing multi-cloud environments.

When is the webinar and how do I attend?

WEBINAR: Why the experience economy needs data-centric AIOps
DATES & TIMES:
AMER | March 21, 11am PST / 2pm EST
APAC | March 22, 8:30am IST / 11am SGT / 1pm AEST
REGISTER: Register here now!

Presenters

Gregg Ostrowski is Executive CTO at Cisco AppDynamics and a thought leader with over 25 years in tech leadership positions, with responsibility for enterprise services, developer relations, and sales engineering. He helps companies succeed with digital transformations, mobility application deployments, DevOps strategies, analytics, and high-ROI business solutions.

Shailesh Manjrekar, Chief Marketing Officer at CloudFabrix, is a seasoned IT professional with over two decades of experience building and managing emerging global businesses. He brings an established background in effective product and solutions marketing, product management, and strategic alliances spanning AI and deep learning, FinTech, and Life Sciences SaaS solutions.

James Schneider is a Solution Architect at Cisco AppDynamics with over 20 years of IT experience across verticals including transportation, healthcare, finance, government, and telecommunications. With a focus on helping organizations monitor, analyze, and optimize their applications for maximum performance and efficiency, James has developed expertise in application development, performance monitoring, and full-stack observability.
Fairly new Splunk user here, looking for Linux auditing solutions. I am running a disconnected version of Splunk Enterprise and thus cannot make use of the content pack that, according to SplunkBase, replaced the application and add-on. Am I still able to use the archived application and add-on? Realistically, I am seeking a solution that would allow me to configure the universal forwarders I'm using to send the appropriate data so I can create queries via the linux_secure sourcetype.
Hello everyone,

I know it's possible to hide events older than, for example, two years from Splunk search. If I apply that setting, however, space is not freed on the system disk where Splunk is installed. I am therefore asking how to delete data older than two years from the Splunk database, so as to free up space on the system disk. Is it even possible? Thank you.

Best Regards, DCUsupport
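For reference, disk space in Splunk is reclaimed by index retention settings rather than by search-time filtering: when a bucket's newest event is older than frozenTimePeriodInSecs, the bucket rolls to frozen, which deletes it by default (unless a coldToFrozenDir archive is configured). A hedged indexes.conf sketch; the index name and size are examples:

```ini
# indexes.conf sketch ("my_index" is an example name).
# Buckets whose newest event is older than frozenTimePeriodInSecs
# are rolled to frozen -- deleted by default, freeing disk space.
[my_index]
frozenTimePeriodInSecs = 63072000   # 2 years = 2 * 365 * 86400
# maxTotalDataSizeMB also triggers freezing when the index grows
# past this size, whichever limit is hit first.
maxTotalDataSizeMB = 500000
```

Note that freezing operates on whole buckets, so events are removed once the entire bucket has aged out, not on the exact day each event turns two years old.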
I have the following situation: I have a universal forwarder that has been sending logs to HF1 (index=idx1). Could you provide suggestions on how to configure this universal forwarder (UF) to send logs to both HF1 (index=idx1) and HF2 (index=idx2)? Any insights or advice would be appreciated. Thank you.
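As a sketch of one common approach (hostnames and ports below are examples): listing two target groups in outputs.conf on the UF clones every event to both destinations.

```ini
# outputs.conf sketch on the UF (servers are example values).
# Naming both groups in defaultGroup sends a copy of every event
# to each destination.
[tcpout]
defaultGroup = hf1_group, hf2_group

[tcpout:hf1_group]
server = hf1.example.com:9997

[tcpout:hf2_group]
server = hf2.example.com:9997
```

Note that the destination index is set on the inputs (or rewritten on the heavy forwarders), not in outputs.conf; routing only selected inputs to one group is done with _TCP_ROUTING in inputs.conf.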
Hello Splunkers!! I have a QlikView expression, and I want to use the same kind of logic in Splunk. Please help me with how I can express the QlikView expression below in Splunk:

=if(sum(ShuttlePareto.Technical) / sum(IF(JobStatusKey='Finished', Throughput.RecordCounter)) <= 0, round(0.000001, 0.01),
    if(sum(ShuttlePareto.Technical) / (sum(IF(JobStatusKey='Finished', Throughput.RecordCounter)) + 0.000001) > 1, 100,
        round(sum(ShuttlePareto.Technical) / sum(IF(JobStatusKey='Finished', Throughput.RecordCounter)) * 100, 0.01)
    )
)
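In Splunk this would typically become an eval with nested if() or case() applied after the sums are computed with stats. The guarded-percentage logic itself, sketched in Python (the input values are hypothetical; the 0.000001 epsilon follows the expression above):

```python
def technical_pct(technical_sum, finished_count):
    """Guarded percentage, mirroring the nested QlikView ifs above:
    treat a non-positive ratio as 0, cap the result at 100 when the
    ratio exceeds 1, otherwise report the ratio as a percentage
    rounded to 2 decimals. The epsilon avoids division by zero.
    """
    ratio = technical_sum / (finished_count + 1e-6)
    if ratio <= 0:
        return 0.0
    if ratio > 1:
        return 100.0
    return round(ratio * 100, 2)

print(technical_pct(25, 200))   # 12.5
print(technical_pct(300, 200))  # 100.0 (capped)
print(technical_pct(0, 200))    # 0.0
```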
Hi all, I require access to the CLI and am using the Splunk Enterprise AMI; any help would be appreciated. Alternatively, if anyone has ideas on how I can do the following, it would be greatly appreciated.

I have a large number of PCAP files for ingestion by Splunk. There seems to be a file-size limit when uploading my merged PCAPs, so I am left with the problem of trying to upload 1000+ PCAPs, which would be a painstakingly long process done manually. A workaround is through the CLI; however, I cannot access it. This is for a university project, and any help would be appreciated. Thanks for reading!
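Once CLI access is available (for the AMI, typically via SSH to the instance), bulk ingestion is usually scripted around the splunk add oneshot command. A hedged Python sketch that builds one command per file; the install path, directory, index name, and sourcetype are all assumptions:

```python
from pathlib import Path

SPLUNK_BIN = "/opt/splunk/bin/splunk"   # example install path
PCAP_DIR = Path("/data/pcaps")          # example directory

def oneshot_commands(pcap_dir, index="pcap_idx", sourcetype="pcap"):
    """Build one `splunk add oneshot` command per .pcap file."""
    return [
        [SPLUNK_BIN, "add", "oneshot", str(p),
         "-index", index, "-sourcetype", sourcetype]
        for p in sorted(pcap_dir.glob("*.pcap"))
    ]

# Each command can then be run with subprocess.run(cmd, check=True)
if PCAP_DIR.is_dir():
    for cmd in oneshot_commands(PCAP_DIR):
        print(" ".join(cmd))
```

Building the command lists first and printing them makes a dry run possible before actually invoking the CLI on a thousand files.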