Hello, I have a bar graph with a static bar that marks a deadline. I need that marker bar's position to change dynamically when the dates change. Is that possible?
Hi, the query below is used for the drill-down of my line graph:

| savedsearch XYZ
| eval Deactivated = strftime(strptime(TO_DATE, "%Y-%m-%d %H:%M:%S.%N"), "%B-%y")
| eval Created = strftime(strptime(FROM_DATE, "%Y-%m-%d %H:%M:%S.%N"), "%B-%y")
| where $apps$ and $bscode$ and $function$ and $dept$ and $country$ and $emp_type$
| search $usertype|s$ = $monthname|s$
| table Function, BS_ID, APP_NAME, MUID, FIRST_NAME, LAST_NAME, FROM_DATE, TO_DATE, LASTLOGON, COUNTRY, CITY, DEPARTMENT_LONG_NAME, "Business Owner", SDM, "System Owner", "Validation Owner"

After the tokens are substituted, the query looks like this in the search panel:

| savedsearch hourradata2
| eval Deactivated = strftime(strptime(TO_DATE, "%Y-%m-%d %H:%M:%S.%N"), "%B-%y")
| eval Created = strftime(strptime(FROM_DATE, "%Y-%m-%d %H:%M:%S.%N"), "%B-%y")
| where like(APP_NAME, "Managed iMAP Application") and like(BS_ID, "%") and like(Function, "%") and like(DEPARTMENT_LONG_NAME, "%") and like(COUNTRY, "%") and like(EMPLOYEE_TYPE, "%")
| search "Active" = "June-23"
| table Function, BS_ID, APP_NAME, MUID, FIRST_NAME, LAST_NAME, FROM_DATE, TO_DATE, LASTLOGON, COUNTRY, CITY, DEPARTMENT_LONG_NAME, "Business Owner", SDM, "System Owner", "Validation Owner"

If the highlighted part is removed, the query returns results; with it, it returns nothing.
Trust everyone is doing well. I noticed that for a particular application being monitored, the total response time displayed in AppDynamics is not the true value; it shows a lower response time that just isn't possible. Checking an individual transaction's response time shows the true value, but the total is just wrong. What could be the cause of this?
  Dataframe row : {"_c0":{"0":"[","1":" {","2":" \"table_name\": \"pc_dwh_rdv.gdh_ls2lo_s99\"","3":" \"deleted_count\": 18","4":" \"redelivered_count\": 0","5":" \"load_date\": \"2023-07-27\"","6":" }","7":" {","8":" \"table_name\": \"pc_dwh_rdv.gdh_spar_s99\"","9":" \"deleted_count\": 8061","10":" \"redelivered_count\": 1","11":" \"load_date\": \"2023-07-27\"","12":" }","13":" {","14":" \"table_name\": \"pc_dwh_rdv.gdh_tf3tx_s99\"","15":" \"deleted_count\": 366619","16":" \"redelivered_count\": 0","17":" \"load_date\": \"2023-07-27\"","18":" }","19":" {","20":" \"table_name\": \"pc_dwh_rdv.gdh_wechsel_s99\"","21":" \"deleted_count\": 2","22":" \"redelivered_count\": 0","23":" \"load_date\": \"2023-07-27\"","24":" }","25":" {","26":" \"table_name\": \"pc_dwh_rdv.gdh_revolvingcreditcard_s99\"","27":" \"deleted_count\": 1285","28":" \"redelivered_count\": 0","29":" \"load_date\": \"2023-07-27\"","30":" }","31":" {","32":" \"table_name\": \"pc_dwh_rdv.gdh_phd_s99\"","33":" \"deleted_count\": 2484","34":" \"redelivered_count\": 204","35":" \"load_date\": \"2023-07-27\"","36":" }","37":" {","38":" \"table_name\": \"pc_dwh_rdv.gdh_npk_s99\"","39":" \"deleted_count\": 1705","40":" \"redelivered_count\": 0","41":" \"load_date\": \"2023-07-27\"","42":" }","43":" {","44":" \"table_name\": \"pc_dwh_rdv.gdh_npk_s98\"","45":" \"deleted_count\": 1517","46":" \"redelivered_count\": 0","47":" \"load_date\": \"2023-07-27\"","48":" }","49":" {","50":" \"table_name\": \"pc_dwh_rdv.gdh_kontokorrent_s99\"","51":" \"deleted_count\": 12998","52":" \"redelivered_count\": 0","53":" \"load_date\": \"2023-07-27\"","54":" }","55":" {","56":" \"table_name\": \"pc_dwh_rdv.gdh_gds_s99\"","57":" \"deleted_count\": 13","58":" \"redelivered_count\": 0","59":" \"load_date\": \"2023-07-27\"","60":" }","61":" {","62":" \"table_name\": \"pc_dwh_rdv.gdh_dszins_s99\"","63":" \"deleted_count\": 57","64":" \"redelivered_count\": 0","65":" \"load_date\": \"2023-07-27\"","66":" }","67":" {","68":" \"table_name\": \"pc_dwh_rdv_gdh_monat.gdh_phd_izr_monthly_s99\"","69":" \"deleted_count\": 1315","70":" \"redelivered_count\": 0","71":" \"load_date\": \"2023-07-27\"","72":" }","73":"]"}}   The above is the sample message of an event which we have in splunk we want to extract the deleted count values like "1315", "57", "13" etc and add those values as a separate fields using rex command . Also from the above message we want to extract load_date value such as 2023-07-27 and add that value as a separate field. Please help us in this.
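A minimal rex sketch of one way to pull these out, assuming the JSON text above is in the event's _raw and using placeholder index/sourcetype names; the \W+ simply skips the escaped quotes, colon, and space between each key and its value:

index=your_index sourcetype=your_sourcetype
| rex max_match=0 "deleted_count\W+(?<deleted_count>\d+)"
| rex "load_date\W+(?<load_date>\d{4}-\d{2}-\d{2})"
| table deleted_count load_date

With max_match=0, deleted_count comes back as a multivalue field holding every count in the event (18, 8061, 366619, and so on); drop the option if you only want the first match.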
Hi, for some reason AppDynamics is not showing all the servers in dashboards. We can see all servers:
- in the Tiers and Nodes section
- in the Service Endpoints section
and so on, but I do not know why they no longer appear in the dashboard when they used to. Any idea where to start?
Hi Team, is there a possibility to monitor or capture the node/metric limit for a particular controller? The reason behind this is that there are many controllers, and it would be great if we could have a common dashboard where we can monitor every controller's limit, as it is difficult to check the limit on each controller's admin.jsp page.
Hi, I am trying to export data into Splunk using the OpenTelemetry splunkhecexporter with TLS insecure_skip_verify=false. When I try to send data to Splunk, it shows an error message. I then tried adding the TLS certificate to the configuration, but it still shows the same error. The following is the configuration in the OpenTelemetry .yaml:

exporters:
  logging:
    verbosity: detailed
  splunk_hec:
    token: "SPLUNK HEC TOKEN"
    endpoint: "SPLUNK COLLECTOR URL"
    tls:
      ca_file: /etc/ssl/certs/cert.crt
      cert_file: /etc/ssl/certs/<Splunk_cert>.crt
      key_file: /etc/ssl/certs/<Splunk_Key>.key

The following is the error message:

2023-07-26T06:12:01.308Z info exporterhelper/queued_retry.go:433 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "metrics", "name": "splunk_hec", "error": "Post \"https://<SPLUNK_LINK>:<SPLUNK_PORT>/services/collector\": x509: certificate is not valid for any names, but wanted to match <SPLUNK DNS>", "interval": "7.402035261s"}

May I know what causes this to happen and how to solve this issue, please? Thank you.
Hello Team,

We have configured a standalone Splunk Kafka Connect server against an Event Hub Kafka topic, but we are not able to fetch the logs from the Kafka topic into Splunk. Below are our configuration details.

standalone.properties
==================================================================================
bootstrap.servers=<hostname>:9093
plugin.path=/opt/app/kafka/plugins,/opt/app/kafka/kafka_2.13-3.4.0/jre1.8.0_211
# unique name for the cluster, used in forming the Connect cluster group. Note that this must not conflict with consumer group IDs
group.id=<group>
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.flush.interval.ms=60000
consumer.security.protocol=SASL_SSL
consumer.sasl.mechanism=OAUTHBEARER
security.protocol=SASL_SSL
sasl.mechanism=OAUTHBEARER
consumer.key.deserializer=org.apache.kafka.common.serialization.StringDeserializer //Kafka Serializer class for Kafka record values, we have set our message body to be String
consumer.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
consumer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required clientId="<token>" clientSecret="<token>" scope="https://<bootstrap_server>/.default";
consumer.sasl.oauthbearer.token.endpoint.url=https://login.microsoftonline.com/<values>/oauth2/v2.0/token
consumer.sasl.login.callback.handler.class=org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler
offset.storage.file.filename=/opt/app/kafka/run/offset_connectorservice_1
#consumer.auto.offset.reset=latest
auto.offset.reset=latest
consumer.group.id=<topic_group>
sasl.login.callback.handler.class=org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler
sasl.oauthbearer.token.endpoint.url=https://login.microsoftonline.com/<values>/oauth2/v2.0/token
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required clientId="<token>" clientSecret="<token>" scope="https://<bootstrap_server>/.default";
access.control.allow.origin=*
access.control.allow.methods=GET,OPTIONS,HEAD,POST,PUT,DELETE
=================================================================================
Logs from standalone kafka
================================================================================
WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource will be ignored.
Jul 27, 2023 4:37:51 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource will be ignored.
Jul 27, 2023 4:37:51 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.LoggingResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.LoggingResource will be ignored.
Jul 27, 2023 4:37:51 AM org.glassfish.jersey.internal.Errors logErrors
WARNING: The following warnings have been detected:
WARNING: The (sub)resource method listLoggers in org.apache.kafka.connect.runtime.rest.resources.LoggingResource contains empty path annotation.
WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method listConnectorPlugins in org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource contains empty path annotation.
WARNING: The (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation.
[2023-07-27 04:37:51,213] INFO Started o.e.j.s.ServletContextHandler@252dc8c4{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:921)
[2023-07-27 04:37:51,213] DEBUG STARTED @14030ms o.e.j.s.ServletContextHandler@252dc8c4{/,null,AVAILABLE} (org.eclipse.jetty.util.component.AbstractLifeCycle:191)
[2023-07-27 04:37:51,213] INFO REST resources initialized; server is started and ready to handle requests (org.apache.kafka.connect.runtime.rest.RestServer:309)
[2023-07-27 04:37:51,213] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:56)
=================================================================================

splunksinkconnector.sh
=================================================================================
#!/bin/bash
curl localhost:8083/connectors -X POST -H "Content-Type: application/json" -d '{
  "name": "splunk-asla-dev",
  "config": {
    "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
    "tasks.max": "1",
    "topics": "<topic>",
    "splunk.hec.uri": "https://<splunk_indexer>:8088",
    "splunk.hec.token": "<token>",
    "splunk.hec.ack.enabled": "true",
    "splunk.hec.raw": "true",
    "splunk.hec.track.data": "true",
    "splunk.hec.ssl.validate.certs": "false",
    "splunk.indexes": "index",
    "splunk.sourcetypes": "sourcetype",
    "splunk.hec.raw.line.breaker": "\n"
  }
}'
===================================================================================

Below are the error messages after running the sink connector:
================================================================================
[2023-07-27 04:40:50,906] TRACE [sink_connector|task-0] [Consumer clientId=connector-consumer-sink_connector-0, groupId=stage-group] sslCiphers: closed 1 metric(s).
(org.apache.kafka.common.network.Selector:269)
[2023-07-27 04:40:50,906] TRACE [sink_connector|task-0] [Consumer clientId=connector-consumer-sink_connector-0, groupId=stage-group] clients: entering performPendingMetricsOperations (org.apache.kafka.common.network.Selector:213)
[2023-07-27 04:40:50,906] TRACE [sink_connector|task-0] [Consumer clientId=connector-consumer-sink_connector-0, groupId=stage-group] clients: leaving performPendingMetricsOperations (org.apache.kafka.common.network.Selector:229)
[2023-07-27 04:40:50,906] TRACE [sink_connector|task-0] [Consumer clientId=connector-consumer-sink_connector-0, groupId=stage-group] clients: closed 0 metric(s). (org.apache.kafka.common.network.Selector:269)
[2023-07-27 04:40:50,906] INFO [sink_connector|task-0] [Principal=:74fafaf6-a0c9-4b8b-bd8f-397ccb3c1212]: Expiring credential re-login thread has been interrupted and will exit. (org.apache.kafka.common.security.oauthbearer.internals.expiring.ExpiringCredentialRefreshingLogin:95)
[2023-07-27 04:40:50,907] TRACE [sink_connector|task-0] LoginManager(serviceName=kafka, publicCredentials=[SaslExtensions[extensionsMap={}]], refCount=0) released (org.apache.kafka.common.security.authenticator.LoginManager:157)
[2023-07-27 04:40:50,907] TRACE [sink_connector|task-0] Removed metric named MetricName [name=version, group=app-info, description=Metric indicating version, tags={client-id=connector-consumer-sink_connector-0}] (org.apache.kafka.common.metrics.Metrics:568)
[2023-07-27 04:40:50,907] TRACE [sink_connector|task-0] Removed metric named MetricName [name=commit-id, group=app-info, description=Metric indicating commit-id, tags={client-id=connector-consumer-sink_connector-0}] (org.apache.kafka.common.metrics.Metrics:568)
[2023-07-27 04:40:50,907] TRACE [sink_connector|task-0] Removed metric named MetricName [name=start-time-ms, group=app-info, description=Metric indicating start-time-ms, tags={client-id=connector-consumer-sink_connector-0}] (org.apache.kafka.common.metrics.Metrics:568)
[2023-07-27 04:40:50,907] INFO [sink_connector|task-0] App info kafka.consumer for connector-consumer-sink_connector-0 unregistered (org.apache.kafka.common.utils.AppInfoParser:83)
[2023-07-27 04:40:50,907] DEBUG [sink_connector|task-0] [Consumer clientId=connector-consumer-sink_connector-0, groupId=stage-group] Kafka consumer has been closed (org.apache.kafka.clients.consumer.KafkaConsumer:2425)
===============================================================================

Please let us know what we are missing in this configuration.
Hello everyone, I have a panel with a pie chart and I want to remove the highlighted part of the panel. It comes with the chart itself; it is not the title.
I am still trying to get my head around regular expressions in Splunk, and would like to use a regex to parse the _raw data and create extracted fields from the contents between the square brackets. The _raw example data looks like this:

2023-07-26 15:11:16.932 [ engine1] [Error-1] INFO java.Exception: example text
2023-07-26 15:11:16.932 [ core2] [Thread-5] WARN java.Exception: example text 2
2023-07-26 15:11:16.932 [ main3] [Token-2] INFO java.Exception: example text 3
2023-07-26 15:11:16.932 [ Job4] [Thread-1] WARN java.Exception: example text 4

I need to extract a field based on the data between the first pair of square brackets, and another field based on the second pair. So I would like the results to look like this:

Field_1     Field_2
engine1     Error-1
core2       Thread-5
main3       Token-2
Job4        Thread-1

Any feedback and help would be greatly appreciated. Thanks
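A minimal rex sketch under the assumption that the bracketed values never contain a closing bracket; the index and sourcetype below are placeholders for your own:

index=your_index sourcetype=your_sourcetype
| rex field=_raw "\[\s*(?<Field_1>[^\]]+)\]\s*\[\s*(?<Field_2>[^\]]+)\]"
| table Field_1 Field_2

If this does what you want, the same pattern can be saved as a search-time field extraction (props.conf EXTRACT) so the fields appear automatically.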
I created the identity and it got created, but when I try to create a connection I get the error below:

Login failed. The login is from an untrusted domain and cannot be used with integrated authentication.
We cannot thank our Splunk Community enough for making Splunk University and .conf23 such an inspiring and exhilarating experience! Using insights and feedback from our customers and our community, we worked hard to make these events valuable and memorable for you.

School was in Session with Splunk University
For those new to the Splunk Community, Splunk University has become an iconic precursor to .conf, and its growth and success are no accident. Our meticulously designed 1-, 2-, and 3-day bootcamps, along with this year's new half-day experience, are a testament to the dedication and hard work our team puts in every day, year after year.

Bootcamps and More
This year, our Splunk Education instructors delivered 26 bootcamps to almost 1,450 students during Splunk University. The professional, enthusiastic, and distinctly Splunky approach we bring to Splunk University has not only helped our customers use Splunk more effectively and strategically, but is also one of the best ways we collect feedback and ideas from our community and integrate them into our current and future education curriculum.

A Learning Journey
To create a sense of cohesion, content from Splunk University was the red thread woven through many of the other experiences at .conf23. If you attended, we hope you had a chance to see us in the Success Zone at the Source=*Pavilion, attend one of our packed Hands-on Labs sessions, or meet us hanging out at the Splunk Community Lounge. We hope you felt the same sense of unity among our team and the community that we did.

We'd like to express our gratitude to our amazing team, our partners, and the entire Splunk Community for making this year's Splunk University and .conf23 an unforgettable experience. Together, we continue to empower Splunk enthusiasts worldwide, equipping them with the skills and knowledge to tackle their biggest data challenges with confidence. Find out more by going to the Splunk Training and Certification website.

Stay Splunky and keep exploring the limitless possibilities of Splunk!

Callie Skokos on Behalf of the Splunk Education Crew
Research published by Cisco AppDynamics highlights the complexities IT teams face when implementing cloud native technologies alongside on-prem applications.

Based on research, including a survey of over 1,000 IT professionals, this report discusses:
- The forces behind the shift to hybrid, and how these environments create new monitoring challenges
- The reasoning behind application observability as a strategic priority for most organizations
- How your peers on various IT teams view application observability, and how they're approaching it in the near- and long-term

Sign up to get the report.

Additional resources
Application Observability: A critical priority to optimize application performance and accelerate innovation (by Ronak Desai) describes how these complexities impact user experience as well as IT organization health, to say nothing of optimizing performance and availability. Find it on the Blog.
I am working on securing all of the communications to my Splunk Enterprise servers. Looking at the output of "splunk btool server list", there is a section named "distributed_leases" that has "sslVerifyServerCert=false" set (as given in etc/system/default/server.conf). On the one hand, it also has "disabled=true", so I should be able to set it however I want and see no effect. On the other hand, I prefer not to tweak settings without knowing what they belong to. Looking at the spec file revealed no more than a trivial answer about securing the connection. Searching on docs.splunk.com turned up nothing except the server.conf.spec file that I already had. Does anyone know what "distributed_leases" is used for?
Hi, I need help! I have this query:

Ticket_Encryption_Type=0x17 Account_Domain="ad.contoso.com"

but I need to pull all the Service Names into a list. How can I do that? Thanks.

07/26/2023 12:31:30 PM
LogName=Security
EventCode=4769
EventType=0
ComputerName=a-dc1.ad.contoso.com
SourceName=Microsoft Windows security auditing.
Type=Information
RecordNumber=1125639551
Keywords=Audit Success
TaskCategory=Kerberos Service Ticket Operations
OpCode=Info
Message=A Kerberos service ticket was requested.

Account Information:
Account Name: team_explorer@ad.contoso.com
Account Domain: ad.contoso.com
Logon GUID: {XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}

Service Information:
Service Name: NAME-ONE$
Service ID: S-X-X-XX-XXXXXXXXX-XXXXXXXXXX-XXXXXXXXX-XXXXX

Network Information:
Client Address: ::ffff:192.168.8.49
Client Port: 41365

Additional Information:
Ticket Options: 0x810000
Ticket Encryption Type: 0x17
Failure Code: 0x0
Transited Services: -
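A minimal sketch of one way to get that list, assuming your Windows add-on already extracts the field as Service_Name (the exact field name may differ in your environment):

EventCode=4769 Ticket_Encryption_Type=0x17 Account_Domain="ad.contoso.com"
| stats values(Service_Name) as Service_Names

If you would rather see how often each service shows up, replace the last line with | stats count by Service_Name.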
I am trying to make Splunk less noisy and filter out some of our 4688 events. I have tried to use Ingest Actions on our indexer, but when I select our sourcetype "WMI:WinEventLog:Security", none of the data appears when I pull a sample. If I run a straight search query, I can see the data and filter down to the exact events I want to drop. Is there something I'm missing?
Hi all, I'm trying to find the solution for the blocked queue.

Thanks...
Hello Splunkers, I have a simple timechart query for the average USED_SPACE of disks over the last 4 days:

index=abc sourcetype=disk_data
| timechart span=24h avg(USED_SPACE) by DISK limit=10 usenull=f useother=f

Below is the result:

TIME              DATApoint  MADOD     MA_ODP    MA_PD      MA_PI     MA_PSD     MA_PSI    MA_P_DBCI  MA_P_DBIN  MA_P_DCRED  MA_P_DEMS  MA_P_DFDB  MA_P_DFDMS  MA_P_DFDP
22/07/2023 01:00  476054.33  39856.33  20105.25  106848.78  23032.06  703847.03  74755.98  20326.13   28006.5    20959.31    21400.05   21921.19   76224.02    83074.48
23/07/2023 01:00  476054.33  39864.33  20104.81  106848.78  23032.06  703847.03  74755.98  20326.13   28006.5    20959.31    21400.05   21921.19   76224.02    83074.48
24/07/2023 01:00  476166.33  39872.33  20105.25  106855.78  23032.06  703847.03  74755.98  20326.13   28022.5    20975.31    21400.05   21937.19   76228.02    83074.48
25/07/2023 01:00  476238.33  39880.33  20105.19  106862.78  23032.06  703851.03  74757.05  20326.13   28038.5    20975.31    21400.05   21953.19   76232.02    83074.48

My question: I understand the DATApoint disk has the highest value so it is the first column, but why is the MADOD column in 2nd place? It should be in the 7th or 8th column, as other disks have higher used space. And if I limit to 5, it gives me the wrong information, because MA_PSD should be in the 1st column but it is in the 7th. Any suggestions on how I can rearrange the columns based on USED_SPACE after the timechart? I'm expecting something like below:

TIME              MA_PSD     DATApoint  MA_PD     MA_P_DFDP  MA_P_DFDMS  MA_PSI    MADOD     MA_P_DBIN  MA_P_DFDB  MA_P_DEMS  MA_P_DCRED  MA_P_DBCI  MA_PI     MA_ODP
22/07/2023 01:00  703847.03  476054.3   106848.8  83074.48   76224.02    74755.98  39856.33  28006.5    21921.2    21400.05   20959.31    20326.13   23032.06  20105.25
23/07/2023 01:00  703847.03  476054.3   106848.8  83074.48   76224.02    74755.98  39864.33  28006.5    21921.2    21400.05   20959.31    20326.13   23032.06  20104.81
24/07/2023 01:00  703847.03  476166.3   106855.8  83074.48   76228.02    74755.98  39872.33  28022.5    21937.2    21400.05   20975.31    20326.13   23032.06  20105.25
25/07/2023 01:00  703851.03  476238.3   106862.8  83074.48   76232.02    74757.05  39880.33  28038.5    21953.2    21400.05   20975.31    20326.13   23032.06  20105.19
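One possible way to reorder the columns after the timechart is the transpose/sort/transpose trick sketched below; it assumes you want the disks ordered by their total usage over the selected days, and you may still need to tidy up how the time column renders afterwards:

index=abc sourcetype=disk_data
| timechart span=24h avg(USED_SPACE) by DISK limit=10 usenull=f useother=f
| transpose 0 header_field=_time column_name=DISK
| addtotals fieldname=Total
| sort 0 -num(Total)
| fields - Total
| transpose 0 header_field=DISK
| rename column as _time

Here addtotals sums each disk's values across the days, sort orders the rows by that sum, and the second transpose turns those rows back into columns, now in descending order of used space.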
The quiz for EDU-1001 Data Models needs to be reviewed by Splunk. There is one question that cannot be answered correctly because it requires a multiple choice selection and the UI uses radio buttons. The directions even state "select all answers that apply". For some reason I had to take this quiz many times to achieve a passing grade. That has not been the case with the other 8 quizzes I have taken on the Certified Power User track, where I have usually passed on the first try. It could just be me, but I found a lot of the language to be confusing and the answers didn't appear to be exactly correct, especially regarding the difference between a Data Model and Dataset. The quiz would be more effective as a personal assessment tool if it would report which answers are incorrect. The quizzes for some of the other classes do exhibit that behavior and they are far more useful for determining knowledge gaps. This course also lacks a feedback widget. It seems like I've seen that on some of the other courses. Thanks!
Hello All, I'm trying to run a query that will allow me to exclude events containing part of a file path found in a Windows event. The event alert looks like this:

%PlaceStuffIsStored%\ANYTHING\STUFF\And\Things\Blah\Blah\Blah\Blah.EXE was allowed to run but would have been prevented from running if the AppLocker policy were enforced.

I was able to rex out the file path, so now I have a new field called Dir:

Dir=%PlaceStuffIsStored%\ANYTHING\STUFF\And\Things\Blah\Blah\Blah\Blah.EXE

I've tried the basics but no luck:

Search!="*ANYTHING\STUFF\*" |mysearch
NOT (Dir="*ANYTHING\STUFF\*")

Thanks
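A minimal sketch of the exclusion, assuming the extracted field is called Dir as above; the angle-bracket lines stand in for your own base search and rex. In SPL a literal backslash inside a quoted search value generally has to be doubled, which is often why wildcard filters on Windows paths appear to match nothing:

<your base search>
| rex <your existing extraction that creates Dir>
| search NOT Dir="*ANYTHING\\STUFF\\*"

Because Dir only exists after your rex runs, the NOT filter has to come later in the pipeline (as a | search or | where) rather than in the initial search.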